
266 Solutions to Problems from
“Linear Algebra” 4th ed., Friedberg, Insel, Spence

Daniel Callahan

© 2016 Daniel Callahan. All rights reserved.


ISBN-13: 978-1533013033

Email the author: editor@starrhorse.com

Previously published as “The Unauthorized Solutions Manual to “Linear Algebra” 4th ed. by Friedberg, Insel, Spence”.
Contents

Chapter 1. Vector Spaces 15


1.1. Section 1.2, #9 15
1.2. Section 1.2, #12 16
1.3. Section 1.3, #3 16
1.4. Section 1.3, #18 17
1.5. Section 1.3, #19 17
1.6. Section 1.3, #20 18
1.7. Section 1.3, #23 18
1.8. Section 1.3, #24 19
1.9. Section 1.3, #30 20
1.10. Section 1.3, #31 21
1.11. Section 1.4, #11 23
1.12. Section 1.4, #13 24
1.13. Section 1.4, #14 24
1.14. Section 1.4, #15 25
1.15. Section 1.4, #16 26
1.16. Section 1.5, #9 26
1.17. Section 1.5, #12 27
1.18. Section 1.5, #13 27
1.19. Section 1.5, #14 28
1.20. Section 1.5, #16 29
1.21. Section 1.5, #20 30
1.22. Section 1.6, #11 30

1.23. Section 1.6, #19 31


1.24. Section 1.6, #20 32
1.25. Section 1.6, #21 33
1.26. Section 1.6, #30 34
1.27. Section 1.6, #35 35

Chapter 2. Linear Transformations and Matrices 37


2.1. Section 2.1, #11 37
2.2. Section 2.1, #12 37
2.3. Section 2.1, #13 38
2.4. Section 2.1, #14 38
2.5. Section 2.1, #15 39
2.6. Section 2.1, #16 40
2.7. Section 2.1, #17 41
2.8. Section 2.1, #18 41
2.9. Section 2.1, #19 42
2.10. Section 2.1, #20 43
2.11. Section 2.1, #24 43
2.12. Section 2.1, #25 44
2.13. Section 2.1, #26 45
2.14. Section 2.1, #27 46
2.15. Section 2.1, #28 47
2.16. Section 2.1, #29 47
2.17. Section 2.1, #30 47
2.18. Section 2.1, #31(a,b) 48
2.19. Section 2.1, #32 49
2.20. Section 2.1, #35 50
2.21. Section 2.1, #37 51
2.22. Section 2.1, #38 51
2.23. Section 2.1, #40(a,b) 52
2.24. Section 2.2, #8 52

2.25. Section 2.2, #11 53


2.26. Section 2.2, #13 54
2.27. Section 2.2, #14 54
2.28. Section 2.2, #15 55
2.29. Section 2.2, #16 56
2.30. Section 2.3, #3 56
2.31. Section 2.3, #5 58
2.32. Section 2.3, #6 59
2.33. Section 2.3, #7 59
2.34. Section 2.3, #9 60
2.35. Section 2.3, #11 60
2.36. Section 2.3, #12 61
2.37. Section 2.3, #13 62
2.38. Section 2.3, #14 62
2.39. Section 2.3, #15 63
2.40. Section 2.3, #16(a) 64
2.41. Section 2.4, #4 65
2.42. Section 2.4, #5 65
2.43. Section 2.4, #6 66
2.44. Section 2.4, #7 66
2.45. Section 2.4, #8 67
2.46. Section 2.4, #9 68
2.47. Section 2.4, #10 68
2.48. Section 2.4, #12 69
2.49. Section 2.4, #13 70
2.50. Section 2.4, #15 70
2.51. Section 2.4, #16 71
2.52. Section 2.4, #17 72
2.53. Section 2.4, #20 72
2.54. Section 2.4, #21 73

2.55. Section 2.4, #24 75


2.56. Section 2.4, #25 76
2.57. Section 2.5, #7 77
2.58. Section 2.5, #8 79
2.59. Section 2.5, #9 79
2.60. Section 2.5, #10 80
2.61. Section 2.5, #11 81
2.62. Section 2.5, #12 81
2.63. Section 2.5, #13 82
2.64. Section 2.5, #14 83

Chapter 3. Elementary Matrix Operations & Systems of Linear


Equations 85
3.1. Section 3.2, #3 85
3.2. Section 3.2, #6 (a,b,e only) 85
3.3. Section 3.2, #8 87
3.4. Section 3.2, #14 87
3.5. Section 3.2, #16 88
3.6. Section 3.2, #19 89
3.7. Section 3.2, #21 90
3.8. Section 3.2, #22 90
3.9. Section 3.3, #2g 91
3.10. Section 3.3, #3g 92
3.11. Section 3.3, #6 93
3.12. Section 3.3, #9 93
3.13. Section 3.3, #10 94
3.14. Section 3.4, #3 94
3.15. Section 3.4, #10 95
3.16. Section 3.4, #14 96
3.17. Section 3.4, #15 96

Chapter 4. Determinants 97
4.1. Section 4.1, #9 97
4.2. Section 4.2, #23 97
4.3. Section 4.2, #24 98
4.4. Section 4.2, #25 99
4.5. Section 4.2, #29 100
4.6. Section 4.2, #30 100
4.7. Section 4.3, #9 101
4.8. Section 4.3, #10 101
4.9. Section 4.3, #11 101
4.10. Section 4.3, #12 102
4.11. Section 4.3, #13(a) 102
4.12. Section 4.3, #15 103
4.13. Section 4.3, #16 103
4.14. Section 4.3, #17 103
4.15. Section 4.3, #20 104
4.16. Section 4.3, #21 105

Chapter 5. Diagonalization 107


5.1. Section 5.1, #2(b,d) 107
5.2. Section 5.1, #3(b,d) 107
5.3. Section 5.1, #4(b) 109
5.4. Section 5.1, #5 110
5.5. Section 5.1, #6 111
5.6. Section 5.1, #7 112
5.7. Section 5.1, #8(a,b) 114
5.8. Section 5.1, #9 115
5.9. Section 5.1, #10 115
5.10. Section 5.1, #11 116
5.11. Section 5.1, #12 117
5.12. Section 5.1, #13 118

5.13. Section 5.1, #14 119


5.14. Section 5.1, #15 120
5.15. Section 5.1, #19 121
5.16. Section 5.1, #20 121
5.17. Section 5.1, #21(a) 122
5.18. Section 5.1, #22 123
5.19. Section 5.1, #24 125
5.20. Section 5.1, #25 126
5.21. Section 5.2, #2(b,d) 126
5.22. Section 5.2, #3(a,b,c) 128
5.23. Section 5.2, #4 131
5.24. Section 5.2, #5 131
5.25. Section 5.2, #7 132
5.26. Section 5.2, #9 133
5.27. Section 5.2, #10 133
5.28. Section 5.2, #12 134
5.29. Section 5.2, #22 135
5.30. Section 5.4, #3 135
5.31. Section 5.4, #4 136
5.32. Section 5.4, #5 136
5.33. Section 5.4, #6(b,d) 137
5.34. Section 5.4, #7 137
5.35. Section 5.4, #11 138
5.36. Section 5.4, #12 138
5.37. Section 5.4, #23 139
5.38. Section 5.4, #24 139
5.39. Section 5.4, #27 140
5.40. Section 5.4, #28 141
5.41. Section 5.4, #29 142
5.42. Section 5.4, #30 142

5.43. Section 5.4, #34 143


5.44. Section 5.4, #35 143

Chapter 6. Inner Product Spaces 145


6.1. Section 6.1, #5 145
6.2. Section 6.1, #6 147
6.3. Section 6.1, #7 148
6.4. Section 6.1, #8(c) 149
6.5. Section 6.1, #10 149
6.6. Section 6.1, #12 150
6.7. Section 6.1, #13 150
6.8. Section 6.1, #17 151
6.9. Section 6.1 #19 152
6.10. Section 6.1, #22(a) 153
6.11. Section 6.1, #26 155
6.12. Section 6.1, #28 156
6.13. Section 6.1, #29 158
6.14. Section 6.2, #2(a) 159
6.15. Section 6.2, #3 159
6.16. Section 6.2, #6 160
6.17. Section 6.2, #7 160
6.18. Section 6.2, #13 161
6.19. Section 6.2, #15(a) 162
6.20. Section 6.2, #19(c) 162
6.21. Section 6.2, #20(c) 163
6.22. Section 6.2, #21 163
6.23. Section 6.2, #22 164
6.24. Section 6.3, #4 165
6.25. Section 6.3, #6 166
6.26. Section 6.3, #7 166
6.27. Section 6.3, #8 166

6.28. Section 6.3, #11 167


6.29. Section 6.3, #13 168
6.30. Section 6.3, #18 169
6.31. Section 6.3, #22(c) 169
6.32. Section 6.4, #2(b,d) 169
6.33. Section 6.4, #4 170
6.34. Section 6.4, #5 170
6.35. Section 6.4, #6 171
6.36. Section 6.4, #7 173
6.37. Section 6.4, #8 174
6.38. Section 6.4, #11 175
6.39. Section 6.4, #12 176
6.40. Section 6.4, #17(a) 177
6.41. Section 6.4, #19 178
6.42. Section 6.5, #2 179
6.43. Section 6.5, #3 179
6.44. Section 6.5, #6 179
6.45. Section 6.5, #7 180
6.46. Section 6.5, #17 180
6.47. Section 6.5, #18 181
6.48. Section 6.5, #31 181
6.49. Section 6.6, #4 183
6.50. Section 6.6, #6 184

Chapter 7. Canonical Forms 185


7.1. Section 7.1, #4 185
7.2. Section 7.1, #5 186
7.3. Section 7.1, #8 187
7.4. Section 7.1, #12 187
7.5. Section 7.2, #11 188
7.6. Section 7.2, #12 188

7.7. Section 7.2, #20 189


7.8. Section 7.3, #6 190
CHAPTER 1

Vector Spaces

1.1. Section 1.2, #9

Prove Corollaries 1 and 2 of Theorem 1.1 and Theorem 1.2(c).

PROOF. Let V be a vector space.

Corollary 1: “The vector 0 described in (VS 3) is unique.”

Suppose 0′ is also a zero vector. Since 0 is a zero vector, 0′ + 0 = 0′; since 0′ is a zero vector, 0 + 0′ = 0. By (VS 1), 0′ = 0′ + 0 = 0 + 0′ = 0, and so 0 = 0′.

Corollary 2: “The vector y described in (VS 4) is unique.”

Suppose that there exist x, y, z ∈ V such that y + x = 0 and z + x = 0. Then we have that y + x = z + x, and by Theorem 1.1 (the cancellation law), y = z.

Theorem 1.2(c): “a0 = 0 for each a ∈ F.”

By (VS 7) and (VS 3), it follows that

a0 + a0 = a(0 + 0) = a0 = a0 + 0

By Theorem 1.1, a0 = 0. □


1.2. Section 1.2, #12

A real-valued function f defined on the real line is called an even function if f(−t) = f(t) for each real number t. Prove that the set of even functions defined on the real line with the operations of addition and scalar multiplication defined in Example 3 is a vector space.

PROOF. (VS 1), (VS 2), (VS 5), (VS 6), (VS 7), and (VS 8) hold because they hold for arbitrary real-valued functions on the real line. The set of even functions is closed under the operations of Example 3: if f and g are even and c ∈ R, then (f + g)(−t) = f(−t) + g(−t) = f(t) + g(t) = (f + g)(t), and (cf)(−t) = cf(−t) = cf(t) = (cf)(t).

Let z(t) be the zero function. Then z(t) = 0 = z(−t), and so z is an even function and (VS 3) is satisfied.

For an even function f, define g(t) = −f(t). Then g(−t) = −f(−t) = −f(t) = g(t), so g is an even function and f + g = z; thus (VS 4) is satisfied. □
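The closure computations above can be spot-checked numerically. The sketch below (in Python, with illustrative function choices not taken from the text) samples a sum, a scalar multiple, and an additive inverse of even functions at ±t:

```python
import math

# Two even functions on the real line (illustrative choices).
f = lambda t: t ** 2
g = lambda t: math.cos(t)

# Operations from Example 3: pointwise addition and scalar multiplication.
h = lambda t: f(t) + g(t)        # (f + g)(t)
k = lambda t: 3 * f(t)           # (3f)(t)
neg = lambda t: -f(t)            # additive inverse of f

# Each resulting function should again satisfy F(-t) = F(t).
for t in [0.0, 0.5, 1.0, 2.5]:
    assert math.isclose(h(-t), h(t))      # f + g is even
    assert math.isclose(k(-t), k(t))      # 3f is even
    assert math.isclose(neg(-t), neg(t))  # -f is even
```

A finite sample is of course no proof; it merely illustrates the pointwise identities used in the closure argument.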

1.3. Section 1.3, #3

Prove that (aA + bB)t = aAt + bBt for any A, B ∈ Mn×n(F) and any a, b ∈ F.

PROOF. Let A, B ∈ Mn×n(F) and a, b ∈ F be chosen arbitrarily. Consider the matrix C = aA + bB. Each entry of C can be written (C)ij, where 1 ≤ i ≤ n, 1 ≤ j ≤ n, and (C)ij = aAij + bBij. It follows that

(Ct)ij = (C)ji = aAji + bBji = a(At)ij + b(Bt)ij

Hence, (aA + bB)t = aAt + bBt. □
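The identity is easy to sanity-check on a concrete instance; the Python sketch below (2×2 real matrices and coefficients chosen arbitrarily, not from the text) compares both sides entrywise:

```python
# Numerical instance of (aA + bB)^t = aA^t + bB^t for 2x2 real matrices.

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def add(M, N):
    return [[M[i][j] + N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

def scale(a, M):
    return [[a * x for x in row] for row in M]

A = [[1, 2], [3, 4]]
B = [[5, -6], [7, 0]]
a, b = 2, -3

lhs = transpose(add(scale(a, A), scale(b, B)))             # (aA + bB)^t
rhs = add(scale(a, transpose(A)), scale(b, transpose(B)))  # aA^t + bB^t
assert lhs == rhs
```

One instance is not a proof, but it exercises exactly the entrywise computation (Ct)ij = aAji + bBji used above.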



1.4. Section 1.3, #18

Prove that a subset W of a vector space V is a subspace of V iff 0 ∈ W


and ax + y ∈ W whenever a ∈ F and x, y ∈ W .

PROOF. Let W be a subset of a vector space V.

Suppose W is a subspace of V. Let a ∈ F and x, y ∈ W be chosen arbitrarily. By Theorem 1.3(a), 0 ∈ W. By Theorem 1.3(c), ax ∈ W; furthermore, by Theorem 1.3(b), ax + y ∈ W.

Now suppose that 0 ∈ W and that ax + y ∈ W whenever a ∈ F and x, y ∈ W. We wish to show that W is a subspace. Since 1 ∈ F, taking a = 1 gives x + y ∈ W whenever x, y ∈ W. Also, since 0 ∈ W, ax = ax + 0 ∈ W whenever a ∈ F and x ∈ W.

Thus 0 ∈ W, x + y ∈ W whenever x, y ∈ W, and ax ∈ W whenever x ∈ W and a ∈ F. By Theorem 1.3, W is a subspace of V. □

1.5. Section 1.3, #19


Let W1 and W2 be subspaces of a vector space V. Prove that W1 ∪ W2 is a subspace of V if and only if W1 ⊆ W2 or W2 ⊆ W1.

PROOF. “⇐” If W2 ⊆ W1, then W1 ∪ W2 = W1. Since W1 is a subspace of V, W1 ∪ W2 is also a subspace of V. A similar result follows if W1 ⊆ W2, mutatis mutandis.

“⇒” Suppose that W1 ∪ W2 is a subspace of V, and suppose toward a contradiction that W1 ⊈ W2 and W2 ⊈ W1. Then the sets K = W1 \ W2 and H = W2 \ W1 are both nonempty. Let y ∈ K and z ∈ H. Since y, z ∈ W1 ∪ W2 and W1 ∪ W2 is a subspace, y + z ∈ W1 ∪ W2; hence y + z ∈ W1 or y + z ∈ W2.

Suppose y + z ∈ W1. Since W1 is a subspace and y ∈ W1, we have z = (y + z) − y ∈ W1. This contradicts z ∈ W2 \ W1.

Now suppose y + z ∈ W2. Since W2 is a subspace and z ∈ W2, we have y = (y + z) − z ∈ W2. This contradicts y ∈ W1 \ W2.

Hence it cannot be that both W1 ⊈ W2 and W2 ⊈ W1; that is, W1 ⊆ W2 or W2 ⊆ W1, as required. □
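A concrete instance of the failure mode in the “⇒” direction: in R2, take W1 to be the x-axis and W2 the y-axis (an illustrative choice, not from the text). The union contains both axes but is not closed under addition:

```python
# W1 = x-axis and W2 = y-axis in R^2; neither contains the other,
# and their union fails to be a subspace.

in_W1 = lambda v: v[1] == 0          # vectors (a, 0)
in_W2 = lambda v: v[0] == 0          # vectors (0, b)
in_union = lambda v: in_W1(v) or in_W2(v)

y = (1, 0)   # y in W1 \ W2
z = (0, 1)   # z in W2 \ W1
s = (y[0] + z[0], y[1] + z[1])       # y + z = (1, 1)

assert in_union(y) and in_union(z)
assert not in_union(s)               # the sum escapes W1 ∪ W2
```

This is exactly the y and z of the proof: y + z lands in neither W1 nor W2.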

1.6. Section 1.3, #20

Prove that if W is a subspace of a vector space V and w1 , w2 , ..., wn ∈ W ,


then a1 w1 + a2 w2 + ... + an wn ∈ W for any scalars a1 , a2 , ..., an .

PROOF. Suppose the above and let a1, a2, ..., an ∈ F. By Theorem 1.3(c), aiwi ∈ W for i = 1, ..., n. By Theorem 1.3(b), a1w1 + a2w2 ∈ W. By n − 1 repeated applications of Theorem 1.3(b), we have that a1w1 + a2w2 + ... + anwn ∈ W. □

1.7. Section 1.3, #23

Let W1 and W2 be subspaces of a vector space V .

(a) Prove that W1 +W2 is a subspace of V that contains both W1 and W2 .

(b) Prove that any subspace of V that contains both W1 and W2 must also
contain W1 + W2.

PROOF. (a) Since W1, W2 are subspaces of V, 0 is a member of each set, and so 0 = 0 + 0 ∈ W1 + W2.

Let a ∈ F and x, y ∈ W1 + W2. Write x = x1 + x2 and y = y1 + y2, where x1, y1 ∈ W1 and x2, y2 ∈ W2. Since W1 and W2 are subspaces, ax1 + y1 ∈ W1 and ax2 + y2 ∈ W2, and so ax + y = (ax1 + y1) + (ax2 + y2) ∈ W1 + W2. By 1.3, #18, W1 + W2 is a subspace of V. Moreover, every w1 ∈ W1 satisfies w1 = w1 + 0 ∈ W1 + W2, and every w2 ∈ W2 satisfies w2 = 0 + w2 ∈ W1 + W2; hence W1 ⊆ W1 + W2 and W2 ⊆ W1 + W2.

(b) Let U be a subspace of V such that W1 ⊆ U and W2 ⊆ U. We wish to show that W1 + W2 ⊆ U.

Suppose x + y ∈ W1 + W2, where x ∈ W1 and y ∈ W2. Then x ∈ U and y ∈ U. Since U is a subspace, x + y ∈ U, and so W1 + W2 ⊆ U. □

For #24, see the definition of a direct sum on p.22.

1.8. Section 1.3, #24

Show that F^n is the direct sum of the subspaces

W1 = {(a1, a2, ..., an) ∈ F^n : an = 0}

and

W2 = {(a1, a2, ..., an) ∈ F^n : a1 = a2 = ... = an−1 = 0}.

PROOF. W1 is a subspace of F^n since:

1) The zero vector (0, 0, ..., 0) satisfies an = 0, so 0 ∈ W1.

2) If (a1, ..., an−1, 0), (b1, ..., bn−1, 0) ∈ W1, then (a1 + b1, ..., an−1 + bn−1, 0) ∈ W1.

3) If k ∈ F and (a1, ..., an−1, 0) ∈ W1, then (ka1, ..., kan−1, 0) ∈ W1.

W2 is also a subspace of F^n since:

1) The zero vector satisfies a1 = a2 = ... = an−1 = 0, so 0 ∈ W2.

2) If (0, ..., 0, b), (0, ..., 0, c) ∈ W2, then (0, ..., 0, b + c) ∈ W2.

3) If k ∈ F and (0, ..., 0, b) ∈ W2, then (0, ..., 0, kb) ∈ W2.

Suppose y ∈ W1 ∩ W2. Then every coordinate of y is 0, so y = 0; that is, W1 ∩ W2 = {0}.

Suppose (c1, c2, ..., cn) ∈ F^n. Then (c1, c2, ..., cn) = (c1, ..., cn−1, 0) + (0, ..., 0, cn), where (c1, ..., cn−1, 0) ∈ W1 and (0, ..., 0, cn) ∈ W2. Hence F^n = W1 + W2. By the definition on p. 22, F^n is the direct sum of W1 and W2. □
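For a concrete vector the decomposition is computed coordinatewise; the Python sketch below uses R4 in place of F^n (an illustrative choice):

```python
# Split a sample vector of R^4 into its W1 part (last coordinate 0)
# and its W2 part (all but the last coordinate 0).

c = [3, -1, 4, 7]                  # arbitrary sample vector
w1 = c[:-1] + [0]                  # (c1, ..., c_{n-1}, 0) in W1
w2 = [0] * (len(c) - 1) + [c[-1]]  # (0, ..., 0, c_n) in W2

assert [x + y for x, y in zip(w1, w2)] == c   # c = w1 + w2
assert w1[-1] == 0                            # w1 lies in W1
assert all(x == 0 for x in w2[:-1])           # w2 lies in W2
```

The splitting is unique because each part is determined coordinate-by-coordinate, mirroring W1 ∩ W2 = {0}.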

1.9. Section 1.3, #30

Let W1 and W2 be subspaces of a vector space V . Prove that V is the


direct sum of W1 and W2 if and only if each vector in V can be uniquely
written as x1 + x2 where x1 ∈ W1 and x2 ∈ W2 .

PROOF. Let W1 and W2 be subspaces of a vector space V.

Suppose that there exists some y ∈ V such that y = x1 + x2 = t1 + t2, where x1, t1 ∈ W1, x2, t2 ∈ W2, and x1 ≠ t1; that is, some vector of V cannot be written uniquely as a sum of elements of W1 and W2. Then

0 = y − y = (x1 + x2) − (t1 + t2) = (x1 − t1) + (x2 − t2) = k1 + k2,

where k1 = x1 − t1 ∈ W1 and k2 = x2 − t2 ∈ W2. Since k1 + k2 = 0, k1 = −k2; because W1 and W2 are subspaces, it follows that k1 = −k2 ∈ W2 and k2 = −k1 ∈ W1. Thus k1 is a nonzero element of W1 ∩ W2, so W1 ∩ W2 ≠ {0} and V is not the direct sum of W1 and W2. By contraposition, if V is the direct sum of W1 and W2, then each vector in V can be written uniquely as x1 + x2.

Now suppose that each vector in V can be written uniquely as x1 + x2 with x1 ∈ W1 and x2 ∈ W2. In particular, W1 + W2 = V. Let w ∈ W1 ∩ W2. Then 0 = w + (−w) with w ∈ W1 and −w ∈ W2, and also 0 = 0 + 0 with 0 ∈ W1 and 0 ∈ W2; by uniqueness, w = 0. Thus W1 ∩ W2 = {0} and W1 + W2 = V, which is the definition of a direct sum. □

1.10. Section 1.3, #31

Let W be a subspace of a vector space V over a field F. For any v ∈ V the set {v} + W = {v + w : w ∈ W} is called the coset of W containing v. (We denote this coset by v + W rather than {v} + W.)

(1) Prove that v + W is a subspace of V iff v ∈ W.
(2) Prove that v1 + W = v2 + W iff v1 − v2 ∈ W.
(3) Addition and scalar multiplication by scalars of F can be defined in the collection S = {v + W : v ∈ V} of all cosets of W as on p. 23. Prove that these operations on S are well-defined.
(4) Prove that the set S = {v + W : v ∈ V} of all cosets of W is a vector space with the operations defined on p. 23.

PROOF. (1) Suppose that v + W is a subspace of V. Then 0 ∈ v + W, so 0 = v + w for some w ∈ W; hence v = −w ∈ W, since W is a subspace.

Now suppose that v ∈ W. Since W is a subspace, −v ∈ W, and so v + (−v) = 0 ∈ v + W. This fulfills Theorem 1.3(a).

If r, s ∈ W, then v + r, v + s ∈ v + W, and (v + r) + (v + s) = v + (r + s + v). Since W is a subspace and r, s, v ∈ W, r + s + v ∈ W, and so (v + r) + (v + s) ∈ v + W. This fulfills Theorem 1.3(b).

Finally, if c ∈ F, then c(v + r) = cv + cr = v + ((c − 1)v + cr), and (c − 1)v + cr ∈ W since v, r ∈ W. This fulfills Theorem 1.3(c). Hence, v + W is a subspace.

(2) Suppose v1 + W = v2 + W. Since v1 = v1 + 0 ∈ v1 + W = v2 + W, there exists w ∈ W such that v1 = v2 + w. Hence v1 − v2 = w ∈ W.

Now suppose that v1 − v2 ∈ W. If v1 + w ∈ v1 + W with w ∈ W, then v1 + w = v2 + ((v1 − v2) + w) ∈ v2 + W, since (v1 − v2) + w ∈ W; hence v1 + W ⊆ v2 + W. Similarly, since v2 − v1 = −(v1 − v2) ∈ W, we obtain v2 + W ⊆ v1 + W. Hence v1 + W = v2 + W.

Thus we have that v1 + W = v2 + W iff v1 − v2 ∈ W.

(3) Suppose v1 + W = v′1 + W and v2 + W = v′2 + W . We wish to show that


(v1 +W ) + (v2 +W ) = (v′1 +W ) + (v′2 +W ) and a(v1 +W ) = a(v′1 +W ).

By (2) above, v1 + W = v′1 + W iff v1 − v′1 ∈ W and v2 + W = v′2 + W iff


v2 − v′2 ∈ W . It follows that (v1 − v′1 ) + (v2 − v′2 ) ∈ W , since W is a subspace
of V ; or, (v1 + v2 ) − (v′1 + v′2 ) ∈ W . Again by (2), we have that (v1 + v2 ) +
W = (v′1 + v′2 ) + W . By the operations on p. 23, this can be rewritten as
(v1 +W ) + (v2 +W ) = (v′1 +W ) + (v′2 +W ).

We have shown above that v1 − v′1 ∈ W . If a ∈ F, then a(v1 − v′1 ) = av1 −


av′1 ∈ W . By (2), av1 + W = av′1 + W ; by the operations on p. 23, a(v1 +
W ) = a(v′1 +W ).

(4) We wish to prove that the set S = {v + W : v ∈ V} of all cosets of W is a vector space with the operations defined on p. 23. Suppose u, v, w ∈ V and a, b ∈ F.

(VS 1) (v + W) + (w + W) = (v + w) + W = (w + v) + W = (w + W) + (v + W)

(VS 2) [(u + W) + (v + W)] + (w + W) = [(u + v) + W] + (w + W) = (u + v + w) + W = (u + W) + [(v + w) + W] = (u + W) + [(v + W) + (w + W)]

(VS 3) (v + W) + (0 + W) = (v + 0) + W = v + W. Thus the zero element of S is 0 + W.

(VS 4) (v + W) + (−v + W) = (v − v) + W = 0 + W

(VS 5) 1(v + W) = 1 · v + W = v + W

(VS 6) (ab)(v + W) = (ab)v + W = a(bv) + W = a(bv + W) = a(b(v + W))

(VS 7) a((v + W) + (w + W)) = a((v + w) + W) = a(v + w) + W = (av + aw) + W = (av + W) + (aw + W)

(VS 8) (a + b)(v + W) = (a + b)v + W = (av + bv) + W = (av + W) + (bv + W)

Hence S with the above operations is a vector space: the quotient space of V modulo W, denoted V/W. □
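Part (2) can be illustrated concretely: in V = R2 with W = {(t, t) : t ∈ R} (an illustrative subspace, not from the text), two cosets are equal exactly when the difference of their representatives lies in W. Since cosets of a subspace are either equal or disjoint, overlap of finite samples already distinguishes the two cases:

```python
# Cosets of the line W = {(t, t)} in R^2, sampled on a finite grid.

in_W = lambda v: v[0] == v[1]

def coset(v, ts=range(-5, 6)):
    # finite sample of v + W; enough to compare cosets on this grid
    return {(v[0] + t, v[1] + t) for t in ts}

v1, v2 = (2, 0), (4, 2)      # v1 - v2 = (-2, -2) lies in W
v3 = (1, 0)                  # v1 - v3 = (1, 0) does not lie in W

assert in_W((v1[0] - v2[0], v1[1] - v2[1]))
assert coset(v1) & coset(v2)          # samples overlap: equal cosets
assert not (coset(v1) & coset(v3))    # samples disjoint: distinct cosets
```

The overlap test works because distinct cosets share no elements at all, which is exactly what (2) establishes.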

1.11. Section 1.4, #11

Prove that span({x}) = {ax : a ∈ F} for any vector x in a vector space.


Interpret this result geometrically in R3 .

PROOF. By definition, span({x}) is the set of all linear combinations of vectors in {x}, and every such combination has the form ax for some a ∈ F. Hence span({x}) ⊆ {ax : a ∈ F}. Conversely, every ax with a ∈ F is a linear combination of x, so {ax : a ∈ F} ⊆ span({x}). Therefore span({x}) = {ax : a ∈ F}.

In R3, span({x}) for x ≠ 0 is the line through the origin in the direction of x; for x = 0, it is the origin alone. □

1.12. Section 1.4, #13

Show that if S1 and S2 are subsets of a vector space V such that S1 ⊆ S2 ,


then span(S1 ) ⊆ span(S2 ). In particular, if S1 ⊆ S2 and span(S1 ) = V ,
then span(S2 ) = V .

PROOF. Suppose S1 ⊆ V, S2 ⊆ V, and S1 ⊆ S2. We wish to show that span(S1) ⊆ span(S2).

Let v ∈ span(S1). Then v is a linear combination of finitely many vectors of S1. Since S1 ⊆ S2, those vectors also belong to S2, so v is a linear combination of vectors of S2; that is, v ∈ span(S2). Hence span(S1) ⊆ span(S2).

Now also suppose that span(S1) = V. Then by the above, V ⊆ span(S2). Since S2 ⊆ V by hypothesis, span(S2) ⊆ V by Theorem 1.5; hence span(S2) = V. □

1.13. Section 1.4, #14

Show that if S1 and S2 are arbitrary subsets of a vector space V, then span(S1 ∪ S2) = span(S1) + span(S2).

PROOF. Suppose v ∈ span(S1 ∪ S2). Then v is a linear combination of finitely many vectors of S1 ∪ S2. Group the terms of this combination as (terms from S1) + (terms from S2), where an empty group is taken to be 0. The first group is an element of span(S1) and the second is an element of span(S2), so v ∈ span(S1) + span(S2). Hence span(S1 ∪ S2) ⊆ span(S1) + span(S2).

Now suppose z ∈ span(S1) + span(S2). Then z is the sum of a linear combination of elements of S1 and a linear combination of elements of S2; that is, z is a linear combination of elements of S1 ∪ S2. It follows that z ∈ span(S1 ∪ S2), so span(S1) + span(S2) ⊆ span(S1 ∪ S2).

Hence, span(S1 ∪ S2) = span(S1) + span(S2). □
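The grouping step can be seen in a small example; the Python sketch below takes S1 = {e1, e2} and S2 = {e3} in R3 (illustrative choices, not from the text) and regroups a combination over S1 ∪ S2 into a span(S1) part plus a span(S2) part:

```python
# Regroup a linear combination over S1 ∪ S2 as
# (terms from S1) + (terms from S2).

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def comb(coeffs, vecs):
    # linear combination sum(c * v) computed componentwise
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(3))

v = comb([2, -1, 5], [e1, e2, e3])    # element of span(S1 ∪ S2)
part1 = comb([2, -1], [e1, e2])       # element of span(S1)
part2 = comb([5], [e3])               # element of span(S2)

assert tuple(a + b for a, b in zip(part1, part2)) == v
```

The same regrouping works for any finite combination, which is the content of the inclusion span(S1 ∪ S2) ⊆ span(S1) + span(S2).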

1.14. Section 1.4, #15

Let S1 and S2 be subsets of a vector space V. Prove that span(S1 ∩ S2) ⊆ span(S1) ∩ span(S2). Give one example in which span(S1 ∩ S2) and span(S1) ∩ span(S2) are equal and one in which they are unequal.

PROOF. Let v ∈ span(S1 ∩ S2). Then v is a linear combination of finitely many vectors of S1 ∩ S2. Each of these vectors lies in S1, so v ∈ span(S1); each also lies in S2, so v ∈ span(S2). Hence v ∈ span(S1) ∩ span(S2), and span(S1 ∩ S2) ⊆ span(S1) ∩ span(S2).

Equality: take V = F, S1 = F, and S2 = {0}. Then span(S1 ∩ S2) = span({0}) = {0}, and span(S1) ∩ span(S2) = F ∩ {0} = {0}, so the two sets are equal.

Inequality: take S1 = {(1, 0), (0, 1)} ⊆ R2 and S2 = {(1, 1)} ⊆ R2. Then S1 ∩ S2 = ∅, so span(S1 ∩ S2) = span(∅) = {0}, while span(S1) ∩ span(S2) = R2 ∩ {(a, a) : a ∈ R} = {(a, a) : a ∈ R} ≠ {0}. □
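The inequality example can be checked directly; the Python sketch below verifies that S1 ∩ S2 is empty (so its span is {0}) while span(S1) ∩ span(S2) contains the nonzero vector (2, 2):

```python
# S1 = {(1,0), (0,1)} and S2 = {(1,1)} in R^2: the spans' intersection
# strictly contains the span of the sets' intersection.

S1 = {(1, 0), (0, 1)}
S2 = {(1, 1)}
assert S1 & S2 == set()     # S1 ∩ S2 = ∅, so span(S1 ∩ S2) = {0}

w = (2, 2)
# w is in span(S1): 2*(1,0) + 2*(0,1), and in span(S2): 2*(1,1),
# so span(S1) ∩ span(S2) contains a nonzero vector.
assert w == (2 * 1 + 2 * 0, 2 * 0 + 2 * 1) == (2 * 1, 2 * 1)
assert w != (0, 0)
```

So the inclusion proved above can be strict, exactly as the example claims.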



1.15. Section 1.4, #16

Let V be a vector space and S a subset of V with the property that whenever v1, v2, ..., vn ∈ S and a1v1 + a2v2 + ... + anvn = 0, then a1 = a2 = ... = an = 0. Prove that every vector in the span of S can be uniquely written as a linear combination of vectors in S.

PROOF. If v1, v2, ..., vn ∈ S, then a1v1 + ... + anvn ∈ span(S). Suppose z ∈ span(S) has two representations z = a1v1 + ... + anvn = b1v1 + ... + bnvn; by inserting terms with zero coefficients, we may assume both combinations use the same vectors v1, ..., vn ∈ S. Then (a1 − b1)v1 + ... + (an − bn)vn = 0, and by hypothesis, ai − bi = 0 for i = 1, ..., n; thus ai = bi for each i, and the representation of z is unique. □

1.16. Section 1.5, #9

Let u and v be distinct vectors in a vector space V . Show that {u, v} is


linearly dependent iff u or v is a multiple of the other.

PROOF. Suppose {u, v} is linearly dependent. Then au + bv = 0, where a and b are not both zero. Suppose a ≠ 0. Then u = −(b/a)v, and so u is a multiple of v. Similarly, if b ≠ 0, then v is a multiple of u.

Now suppose that one of u, v is a multiple of the other, say u = av. Then u − av = 0, where the coefficient of u is 1 ≠ 0. By the definition on p. 36, {u, v} is linearly dependent. □
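For vectors in R2 this dichotomy can be tested with a determinant: {u, v} is linearly dependent iff the 2×2 determinant with rows u and v vanishes. The Python sketch below uses that standard fact (not part of the text's proof) on two illustrative pairs:

```python
# det [[u1, u2], [v1, v2]] = 0 exactly when one of u, v in R^2
# is a scalar multiple of the other (or one of them is 0).

def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

u, v = (2, 4), (1, 2)          # u = 2v: {u, v} is linearly dependent
assert det2(u, v) == 0

u2, v2 = (1, 0), (0, 1)        # neither is a multiple of the other
assert det2(u2, v2) != 0       # {u2, v2} is linearly independent
```

The vanishing determinant encodes the relation au + bv = 0 with (a, b) ≠ (0, 0) in this two-dimensional setting.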
