Unit-1

The document introduces vector spaces, specifically ℝⁿ, explaining the concept of position vectors and their algebraic structure through definitions and examples. It discusses operations such as addition and scalar multiplication, highlighting properties like commutativity and associativity. Additionally, it generalizes the concept of vector spaces and defines fields, illustrating how various mathematical structures can exhibit similar properties.


Vector Space ℝ𝑛

Introduction to vectors
The locations of points in a plane are usually discussed in
terms of a coordinate system. For example, in Figure below,
the location of each point in the plane can be described
using a rectangular coordinate system. The point 𝐴 is the
point (5,3).

Furthermore, 𝐴 is a certain distance in a certain direction
from the origin (0,0). The distance and direction are
characterized by the length and direction of the line
segment from the origin 𝑂 to 𝐴. We call such a directed line
segment a position vector and denote it by 𝑂𝐴. 𝑂 is called
the initial point of 𝑂𝐴, and 𝐴 is called the terminal point.
There are thus two ways of interpreting (5,3): it defines the
location of a point in a plane, and it also defines the
position vector 𝑂𝐴.

EXAMPLE 1:

Sketch the position vectors 𝑂𝐴 = (4,1), 𝑂𝐵 = (−5,−2), and
𝑂𝐶 = (−3,4). See the figure below.

Denote the collection of all ordered pairs of real numbers
by ℝ2. Note the significance of “ordered” here; for example,
(5,3) is not the same vector as (3,5). The order is significant.
These concepts can be extended to arrays consisting of
three real numbers, such as (2,4,3), which can be
interpreted in two ways: as the location of a point in
three-space relative to an 𝑥𝑦𝑧 coordinate system, or as a
position vector. These interpretations are illustrated in the
figure below. We shall denote the set of all ordered triples
of real numbers by ℝ3.
We now generalize these concepts with the following
definition.
DEFINITION: Let (𝑢1, 𝑢2, …, 𝑢𝑛) be a sequence of 𝑛 real
numbers. The set of all such sequences is called n-space
and is denoted ℝ𝑛.
𝑢1 is the first component of (𝑢1, 𝑢2, …, 𝑢𝑛), 𝑢2 is the second
component, and so on.
EXAMPLE 2: ℝ4 is the collection of all ordered sets of four
real numbers. For example, (1,2,3,4) and (−1, 3/4, 0, 5) are
elements of ℝ4.

ℝ5 is the collection of all ordered sets of five real
numbers. For example, (−1, 2, 0, 7/8, 9) is in this collection.
DEFINITION: Let 𝑢 = (𝑢1, …, 𝑢𝑛) and 𝑣 = (𝑣1, …, 𝑣𝑛) be two
elements of ℝ𝑛. We say that 𝑢 and 𝑣 are equal if
𝑢1 = 𝑣1, …, 𝑢𝑛 = 𝑣𝑛. Thus two elements of ℝ𝑛 are equal if their
corresponding components are equal.

Let us now develop the algebraic structure for ℝ𝑛 .


DEFINITION: Let 𝑢 = (𝑢1, …, 𝑢𝑛) and 𝑣 = (𝑣1, …, 𝑣𝑛) be
elements of ℝ𝑛 and let 𝑐 be a scalar. Addition and scalar
multiplication are performed as follows.
Addition: 𝑢 + 𝑣 = (𝑢1 + 𝑣1, …, 𝑢𝑛 + 𝑣𝑛)
Scalar multiplication: 𝑐𝑢 = (𝑐𝑢1, …, 𝑐𝑢𝑛)
To add two elements of ℝ𝑛 , we add corresponding
components. To multiply an element of ℝ𝑛 by a scalar, we
multiply every component by that scalar. Observe that the
resulting elements are in ℝ𝑛 . We say that ℝ𝑛 is closed under
addition and under scalar multiplication.
ℝ𝑛 with the operations of componentwise addition and
scalar multiplication is an example of a vector space, and
its elements are called vectors.
We shall henceforth in this course interpret ℝ𝑛 to be a
vector space.
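These componentwise operations translate directly into code. A minimal sketch in Python (the helper names `vec_add` and `scalar_mul` are illustrative, not from the text):

```python
# Componentwise operations on R^n, following the definitions above.

def vec_add(u, v):
    """u + v: add corresponding components."""
    assert len(u) == len(v), "vectors must lie in the same R^n"
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mul(c, u):
    """c*u: multiply every component by the scalar c."""
    return [c * ui for ui in u]

print(vec_add([4, 1], [2, 3]))   # [6, 4]
print(scalar_mul(2, [3, 2]))     # [6, 4]
# Subtraction as addition plus scalar multiplication by -1:
print(vec_add([5, 3, -6], scalar_mul(-1, [2, 1, 3])))  # [3, 2, -9]
```

Note that each result is again a list of the same length, mirroring the closure of ℝ𝑛 under both operations.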
We now give examples to illustrate geometrical
interpretations of these vectors and their operations.
EXAMPLE 3: This example gives a geometrical
interpretation of vector addition. Consider the sum of the
vectors (4,1) and (2,3). We get
(4,1) + (2,3) = (6,4)
In the figure below, we interpret these vectors as position
vectors. Construct the parallelogram having the vectors
(4,1) and (2,3) as adjacent sides. The vector (6,4), the sum,
will be the diagonal of the parallelogram.

In general, if 𝑢 and 𝑣 are vectors in the same vector space,


then 𝑢 + 𝑣 is the diagonal of the parallelogram defined by 𝑢
and 𝑣. See the figure below. This way of visualizing vector
addition is useful in all vector spaces.
EXAMPLE 4: This example gives a geometrical
interpretation of scalar multiplication. Consider the scalar
multiple of the vector (3,2) by 2. We get
2(3,2) = (6,4)
Observe in the figure below that (6,4) is a vector in the
same direction as (3,2) and 2 times it in length.

The direction will depend upon the sign of the scalar. The
general result is as follows. Let 𝑢 be a vector and 𝑐 a scalar.
The direction of 𝑐𝑢 will be the same as the direction of 𝑢 if
𝑐 > 0, and the opposite direction to 𝑢 if 𝑐 < 0. The length of
𝑐𝑢 is |𝑐| times the length of 𝑢. See the figure below.

Zero vector:
The vector (0,0,…,0), having 𝑛 zero components, is called the
zero vector of ℝ𝒏 and is denoted by 0. For example, (0,0,0)
is the zero vector of ℝ3. We shall find that zero vectors play
a central role in the development of vector spaces.
Negative Vector:

The vector (−1)𝑢 is written −𝒖 and is called the negative of
𝒖. It is a vector that has the same magnitude as 𝒖, but lies
in the opposite direction to 𝒖.
Subtraction:
Subtraction is performed on elements of ℝ𝒏 by subtracting
corresponding components. For example, in ℝ3,
(5,3,−6) − (2,1,3) = (3,2,−9)
Observe that this is equivalent to
(5,3,−6) + (−1)(2,1,3) = (3,2,−9)
Thus subtraction is not a new operation on ℝ𝑛, but a
combination of addition and scalar multiplication by −1.
We have only two independent operations on ℝ𝑛, namely
addition and scalar multiplication.
We now discuss some of the properties of vector addition
and scalar multiplication. The properties are similar to
those of matrices.

THEOREM:
Let 𝒖, 𝒗 and 𝒘 be vectors in ℝ𝑛 and let 𝑐 and 𝑑 be scalars.
(a) 𝒖 + 𝒗 = 𝒗 + 𝒖
(b) 𝒖 + (𝒗 + 𝒘) = (𝒖 + 𝒗) + 𝒘
(c) 𝒖 + 𝟎 = 𝟎 + 𝒖 = 𝒖
(d) 𝒖 + (−𝒖) = 𝟎
(e) 𝒄 (𝒖 + 𝒗) = 𝒄𝒖 + 𝒄𝒗
(f) (𝒄 + 𝒅) 𝒖 = 𝒄𝒖 + 𝒅𝒖
(g) 𝒄 (𝒅𝒖) = (𝒄𝒅)𝒖
(h) 𝟏𝒖 = 𝒖
These results are verified by writing the vectors in terms of
components and using the definitions of vector addition
and scalar multiplication, and the properties of real
numbers. We give the proofs of (a) and (e).
𝒖 + 𝒗 = 𝒗 + 𝒖:
Let 𝑢 = (𝑢1, …, 𝑢𝑛) and 𝑣 = (𝑣1, …, 𝑣𝑛). Then
𝑢 + 𝑣 = (𝑢1, …, 𝑢𝑛) + (𝑣1, …, 𝑣𝑛)
= (𝑢1 + 𝑣1, …, 𝑢𝑛 + 𝑣𝑛)
= (𝑣1 + 𝑢1, …, 𝑣𝑛 + 𝑢𝑛)
= 𝑣 + 𝑢
𝒄(𝒖 + 𝒗) = 𝒄𝒖 + 𝒄𝒗:
𝑐(𝒖 + 𝒗) = 𝑐((𝑢1, …, 𝑢𝑛) + (𝑣1, …, 𝑣𝑛))
= 𝑐(𝑢1 + 𝑣1, …, 𝑢𝑛 + 𝑣𝑛)
= (𝑐(𝑢1 + 𝑣1), …, 𝑐(𝑢𝑛 + 𝑣𝑛))
= (𝑐𝑢1 + 𝑐𝑣1, …, 𝑐𝑢𝑛 + 𝑐𝑣𝑛)
= (𝑐𝑢1, …, 𝑐𝑢𝑛) + (𝑐𝑣1, …, 𝑐𝑣𝑛)
= 𝑐(𝑢1, …, 𝑢𝑛) + 𝑐(𝑣1, …, 𝑣𝑛)
= 𝑐𝑢 + 𝑐𝑣
Some of the above properties can be illustrated
geometrically. The commutative property of vector addition
is illustrated in the figure below. Note that we get the same
diagonal to the parallelogram whether we add the vectors
in the order 𝑢 + 𝑣 or in the order 𝑣 + 𝑢. One implication of
part (b) above is that we can write certain algebraic
expressions involving vectors, without parentheses.
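Properties (a)–(h) can also be spot-checked numerically. The sketch below verifies (a), (e) and (g) for one arbitrarily chosen pair of vectors in ℝ3; it illustrates, but of course does not prove, the theorem:

```python
# Spot-check of properties (a), (e), (g) on sample vectors in R^3.

def add(u, v):
    return [x + y for x, y in zip(u, v)]

def smul(c, u):
    return [c * x for x in u]

u, v = [1, -2, 3], [4, 0, -5]    # illustrative sample vectors
c, d = 2, -3                     # illustrative sample scalars

assert add(u, v) == add(v, u)                             # (a) commutativity
assert smul(c, add(u, v)) == add(smul(c, u), smul(c, v))  # (e) distributivity
assert smul(c, smul(d, u)) == smul(c * d, u)              # (g) c(du) = (cd)u
print("properties (a), (e), (g) hold for these samples")
```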

General Vector Space


In this section, we generalize the concept of the vector
space ℝ𝑛. We examine the underlying algebraic structure of
ℝ𝑛. Any set with this structure has the same mathematical
properties and will be called a vector space. The results
that were developed for the vector space ℝ𝑛 will also apply
to such sets. We will, for example, find that certain spaces
of functions have the same mathematical properties as the
vector space ℝ𝑛. Similarly, the scalar set has the algebraic
structure of the real number set and will be called a field.
Precise definitions are as follows.
Definition: Let 𝐹 be a set having at least two elements 0𝐹
and 1𝐹 (0𝐹 ≠ 1𝐹), together with two operations ‘.’ (multiplication)
and ‘+’ (addition). A field (𝐹, +, .) is a triplet satisfying the
following axioms, for any three elements 𝑎, 𝑏, 𝑐 ∈ 𝐹:
1) Addition and multiplication are closed:
𝑎 + 𝑏 ∈ 𝐹 and 𝑎𝑏 ∈ 𝐹.
2) Addition and multiplication are associative:
(𝑎 + 𝑏) + 𝑐 = 𝑎 + (𝑏 + 𝑐),
(𝑎𝑏)𝑐 = 𝑎(𝑏𝑐)

3) Addition and multiplication are commutative:


𝑎 + 𝑏 = 𝑏 + 𝑎, 𝑎𝑏 = 𝑏𝑎
4) Multiplication distributes over addition:
𝑎(𝑏 + 𝑐) = 𝑎𝑏 + 𝑎𝑐
5) 0𝐹 is the additive identity:
0𝐹 + 𝑎 = 𝑎 + 0𝐹 = 𝑎
6) 1𝐹 is the multiplicative identity:
1𝐹 𝑎 = 𝑎 1𝐹 = 𝑎
7) Every element has an additive inverse:
∃ −𝑎 ∈ 𝐹 such that 𝑎 + (−𝑎) = (−𝑎) + 𝑎 = 0𝐹
8) Every non-zero element has multiplicative inverse:
If 𝑎 ≠ 0𝐹 , then ∃ 𝑎−1 ∈ 𝐹 such that 𝑎𝑎−1 = 𝑎−1 𝑎 = 1𝐹
Example: (ℚ, +, . ), (ℝ, +, . ) and (ℂ, +, . ) are all fields.
(ℤ, +, .) is not a field because no element other than
−1 and 1 has a multiplicative inverse.
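Axiom 8 is exactly where ℤ fails. A small sketch, using Python's `fractions.Fraction` for ℚ and a brute-force search over a finite range of integers (an illustrative shortcut, not a proof about all of ℤ):

```python
from fractions import Fraction

# In Z, a*b = 1 has integer solutions only for a = 1 or a = -1;
# the search below checks a finite window as an illustration.
def has_integer_inverse(a, search=range(-100, 101)):
    return any(a * b == 1 for b in search)

print([a for a in range(-3, 4) if a != 0 and has_integer_inverse(a)])  # [-1, 1]

# In Q, every nonzero element has a multiplicative inverse (axiom 8):
a = Fraction(7, 8)
print(a * (1 / a) == 1)   # True
```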
Definition: A vector space 𝑉(𝐹) over a field 𝐹 is a non-empty
set, whose elements are called vectors, possessing
two operations ‘+’ (vector addition) and ‘.’ (scalar
multiplication) which satisfy the following axioms, for all
𝑎, 𝑏, 𝑐 ∈ 𝑉(𝐹) and 𝛼, 𝛽 ∈ 𝐹:


1) Vector addition and scalar multiplication are closed:
𝑎 + 𝑏 ∈ 𝑉(𝐹), 𝛼𝑎 ∈ 𝑉(𝐹).
2) Commutativity:
𝑎 + 𝑏 = 𝑏 + 𝑎
3) Associativity:
(𝑎 + 𝑏) + 𝑐 = 𝑎 + (𝑏 + 𝑐)
4) Existence of an additive identity:
∃ 0 ∈ 𝑉(𝐹) such that 𝑎 + 0 = 0 + 𝑎 = 𝑎
5) Existence of additive inverses:
∃ −𝑎 ∈ 𝑉(𝐹) such that 𝑎 + (−𝑎) = (−𝑎) + 𝑎 = 0
6) Distributive laws:
𝛼(𝑎 + 𝑏) = 𝛼𝑎 + 𝛼𝑏, (𝛼 + 𝛽)𝑎 = 𝛼𝑎 + 𝛽𝑎
7) 1𝐹 𝑎 = 𝑎
8) (𝛼𝛽)𝑎 = 𝛼(𝛽𝑎).
Note: Throughout this course we use the field of real
numbers as the scalar set. We may refer to a vector space
𝑉(ℝ) simply as 𝑉.
Example:
1) The set of all 2×2 matrices with real entries is
a vector space over the field of real numbers under the usual
addition and scalar multiplication of matrices.
2) The set of all functions having real numbers as their
domain is a vector space over the field of real numbers
under the following operations.
(𝒇 + 𝒈)(𝒙) = 𝒇(𝒙) + 𝒈(𝒙)
(𝒄𝒇)(𝒙) = 𝒄𝒇(𝒙)
For all functions 𝑓 and 𝑔 and scalar 𝑐.
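These pointwise operations can be sketched with closures; `func_add` and `func_smul` below are illustrative helper names, and 𝑓, 𝑔 are sample functions chosen for the demonstration:

```python
# Pointwise operations on real-valued functions, as in Example 2.

def func_add(f, g):
    return lambda x: f(x) + g(x)   # (f+g)(x) = f(x) + g(x)

def func_smul(c, f):
    return lambda x: c * f(x)      # (cf)(x) = c.f(x)

f = lambda x: x + 1
g = lambda x: 2 * x**2 - 2 * x + 3

s = func_add(f, g)
t = func_smul(2, f)
print(s(2))   # 10  (= f(2) + g(2) = 3 + 7)
print(t(5))   # 12  (= 2 * f(5))
```

The sum and scalar multiple are again functions with domain ℝ, so the set is closed under both operations.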
We now give a theorem that contains useful properties of
vectors. These are properties that were immediately
apparent for ℝ𝑛 and were taken almost for granted. They
are not, however, so apparent for all vector spaces.
Theorem: Let 𝑉 be a vector space, 𝑣 a vector in 𝑉, 0 the
zero vector of 𝑉, 𝑐 a scalar, and 0 the zero scalar. Then

(a) 0𝑣 = 0
(b) 𝑐0 = 0
(c) (−1)𝑣 = −𝑣
(d) If 𝑐𝑣 = 0, then either 𝑐 = 0 or 𝑣 = 0
Proof: (a) 0𝑣 + 0𝑣 = (0 + 0)𝑣 = 0𝑣
Add the negative of 0𝑣, namely −0𝑣, to both sides of this
equation.
(0𝑣 + 0𝑣) + (−0𝑣) = 0𝑣 + (−0𝑣)

⇒ 0𝑣 + (0𝑣 + (−0𝑣)) = 0

⇒ 0𝑣 + 0 = 0

⇒ 0𝑣 = 0

(b) 𝑐0 = 𝑐(0 + 0)

⇒ 𝑐0 = 𝑐0 + 𝑐0

⇒ 𝑐0 = 0.
(c) (−1)𝑣 + 𝑣 = (−1)𝑣 + 1𝑣
= (−1 + 1)𝑣
= 0𝑣 = 0
Thus (−1)𝑣 is the additive inverse of 𝑣,
i.e. (−1)𝑣 = −𝑣
(d) Assume that 𝑐 ≠ 0𝐹.

Then ∃ 𝑐−1 such that 𝑐−1𝑐 = 1𝐹

𝑐𝑣 = 0 ⇒ 𝑐−1(𝑐𝑣) = 𝑐−10

⇒ (𝑐−1𝑐)𝑣 = 0

⇒ 1𝐹 𝑣 = 0

⇒ 𝑣 = 0.

SUBSPACES
Definition: Let 𝑉(𝐹) be a vector space. A non-empty subset
𝑈 ⊆ 𝑉 which is also a vector space under the inherited
operations of 𝑉 is called a vector subspace of 𝑉.

Example: {0} and 𝑉 are trivial vector subspaces of 𝑉.

Theorem: Let 𝑉(𝐹) be a vector space. Then 𝑈 ⊆ 𝑉, 𝑈 ≠ ∅, is a
subspace of 𝑉 if and only if 𝑎 + 𝛼𝑏 ∈ 𝑈 for all 𝛼 ∈ 𝐹 and
𝑎, 𝑏 ∈ 𝑈.
Proof: Let 𝑉(𝐹) be a vector space and let 𝑈 be a non-empty
subset of 𝑉. If 𝑈 is a subspace of 𝑉, then it is clear that
𝑎 + 𝛼𝑏 ∈ 𝑈 for all 𝛼 ∈ 𝐹 and 𝑎, 𝑏 ∈ 𝑈.
Conversely, suppose that 𝑈 is non-empty subset of 𝑉 and
for all 𝛼 ∈ 𝐹 and 𝑎, 𝑏 ∈ 𝑈, 𝑎 + 𝛼𝑏 ∈ 𝑈.
We prove that 𝑈 is a subspace of 𝑉. That is, 𝑈 is a vector
space under the inherited operations of 𝑉.
1) Taking 𝛼 = −1 and 𝑎 = 𝑏 = 𝑣 in the condition, we get
0 = 𝑣 − 𝑣 ∈ 𝑈 for every 𝑣 ∈ 𝑈, and then −𝑣 = 0 + (−1)𝑣 ∈ 𝑈.
Taking 𝑎 = 0 shows that scalar multiplication is closed on 𝑈,
and taking 𝛼 = 1 shows that vector addition is closed on 𝑈.
All other properties hold as they hold in 𝑉.
Therefore ‘𝑈’ is a vector space with the same binary
operations as on 𝑉 and hence 𝑈 is a subspace of 𝑉.
Theorem: Let X⊆ 𝑉, 𝑌 ⊆ 𝑉 be vector subspaces of a vector
space 𝑉(𝐹). Then their intersection 𝑋 ∩ 𝑌 is also a vector
subspace of 𝑉.
Proof: Follows from the above Theorem.
Problem 1: Let 𝑢 = (−1,4,3,7) and 𝑣 = (−2,−3,1,0) be vectors
in ℝ4. Find 𝑢 + 𝑣 and 3𝑢.

Solution: We get

𝑢 + 𝑣 = (−1,4,3,7) + (−2,−3,1,0) = (−3,1,4,7)

3𝑢 = 3(−1,4,3,7) = (−3,12,9,21)

Note that the resulting vector under each
operation is in the original vector space ℝ4.
Problem 2: Let 𝑢 = (2,5,−3), 𝑣 = (−4,1,9), 𝑤 = (4,0,2).
Determine the vector 2𝑢 − 3𝑣 + 𝑤.

Solution: 2𝑢 − 3𝑣 + 𝑤 = 2(2,5,−3) − 3(−4,1,9) + (4,0,2)

= (4,10,−6) − (−12,3,27) + (4,0,2)

= (4 + 12 + 4, 10 − 3 + 0, −6 − 27 + 2)

= (20,7,−31).
Problem 3: Let ℂ denote the complex numbers and ℝ
denote the real numbers. Is ℂ a vector space over ℝ under
ordinary addition and multiplication? Is ℝ a vector space
over ℂ?

Solution: ℂ is a vector space over ℝ, but ℝ is not a vector
space over ℂ, since ℝ is not closed under scalar
multiplication by elements of ℂ.
Problem 4: Let 𝑉(𝐹) be a vector space, and let 𝑈1 ⊆ 𝑉 and
𝑈2 ⊆ 𝑉 be vector subspaces. Prove that if 𝑈1 ∪ 𝑈2 is a vector
subspace of 𝑉, then either 𝑈1 ⊆ 𝑈2 or 𝑈2 ⊆ 𝑈1.
Solution: If 𝑈1 ⊆ 𝑈2 or 𝑈2 ⊆ 𝑈1, then it is trivial that 𝑈1 ∪ 𝑈2
is a subspace of 𝑉.

Suppose that 𝑈1 ⊈ 𝑈2 and 𝑈2 ⊈ 𝑈1.

Then ∃ 𝑢1 ∈ 𝑈1 with 𝑢1 ∉ 𝑈2, and ∃ 𝑢2 ∈ 𝑈2 with 𝑢2 ∉ 𝑈1.
Consider 𝑢1 + 𝑢2.
Then 𝑢1 + 𝑢2 can be in neither 𝑈1 nor 𝑈2.

(If 𝑢1 + 𝑢2 ∈ 𝑈1, then (𝑢1 + 𝑢2) − 𝑢1 = 𝑢2 ∈ 𝑈1, a
contradiction; similarly 𝑢1 + 𝑢2 ∉ 𝑈2.)

Therefore 𝑈1 ∪ 𝑈2 is not closed with respect to vector
addition. Hence 𝑈1 ∪ 𝑈2 is not a subspace of 𝑉.
Problem 5: Prove that 𝑋 = {(𝑎, 𝑏, 𝑐, 𝑑) ∈ ℝ4 : 𝑎 − 𝑏 − 3𝑑 = 0} is a
vector subspace of ℝ4.

Solution: Let (𝑎1, 𝑏1, 𝑐1, 𝑑1), (𝑎2, 𝑏2, 𝑐2, 𝑑2) ∈ 𝑋 and 𝛼 ∈ ℝ. Then

(𝑎1, 𝑏1, 𝑐1, 𝑑1) + 𝛼(𝑎2, 𝑏2, 𝑐2, 𝑑2)
= (𝑎1 + 𝛼𝑎2, 𝑏1 + 𝛼𝑏2, 𝑐1 + 𝛼𝑐2, 𝑑1 + 𝛼𝑑2),

and for this vector,

(𝑎1 + 𝛼𝑎2) − (𝑏1 + 𝛼𝑏2) − 3(𝑑1 + 𝛼𝑑2)
= (𝑎1 − 𝑏1 − 3𝑑1) + 𝛼(𝑎2 − 𝑏2 − 3𝑑2)

= 0 + 𝛼 0 = 0.
Thus the combination lies in 𝑋, so 𝑋 is a subspace.
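The closure computation in Problem 5 can be spot-checked numerically. The sketch below draws random members of 𝑋 and tests the criterion 𝑢 + 𝛼𝑤 ∈ 𝑋 (illustrative code, not part of the proof):

```python
import random

# X = {(a,b,c,d) in R^4 : a - b - 3d = 0}; check u + alpha*w stays in X.
def in_X(v):
    a, b, c, d = v
    return abs(a - b - 3 * d) < 1e-9

def random_member():
    b, c, d = (random.uniform(-5, 5) for _ in range(3))
    return (b + 3 * d, b, c, d)   # pick a = b + 3d so the equation holds

for _ in range(100):
    u, w = random_member(), random_member()
    alpha = random.uniform(-5, 5)
    assert in_X(tuple(ui + alpha * wi for ui, wi in zip(u, w)))
print("u + alpha*w stayed in X in all 100 trials")
```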
Problem 6: Prove that 𝑋 = {(𝑎, 2𝑎 − 3𝑏, 5𝑏, 𝑎 + 2𝑏, 𝑎) : 𝑎, 𝑏 ∈ ℝ}
is a vector subspace of ℝ5.

Solution:

(𝑎1, 2𝑎1 − 3𝑏1, 5𝑏1, 𝑎1 + 2𝑏1, 𝑎1)
+ 𝛼(𝑎2, 2𝑎2 − 3𝑏2, 5𝑏2, 𝑎2 + 2𝑏2, 𝑎2)
= (𝑎1 + 𝛼𝑎2, 2(𝑎1 + 𝛼𝑎2) − 3(𝑏1 + 𝛼𝑏2), 5(𝑏1 + 𝛼𝑏2),
(𝑎1 + 𝛼𝑎2) + 2(𝑏1 + 𝛼𝑏2), 𝑎1 + 𝛼𝑎2),
which is again of the given form (with 𝑎 = 𝑎1 + 𝛼𝑎2 and
𝑏 = 𝑏1 + 𝛼𝑏2). Hence 𝑋 is a subspace.
Exercise
1. Compute the following vector expressions for 𝑢 = (1,2),
𝑣 = (4,−1), and 𝑤 = (−3,5).
(a) 𝑢 + 3𝑣

(b) 2𝑢 + 3𝑣 − 𝑤

(c) −3𝑢 + 4𝑣 − 2𝑤
2. Prove that the set ℂ𝑛 with the operations of addition and
scalar multiplication defined as follows is a vector space:
(𝑢1, …, 𝑢𝑛) + (𝑣1, …, 𝑣𝑛) = (𝑢1 + 𝑣1, …, 𝑢𝑛 + 𝑣𝑛)

𝑐(𝑢1, …, 𝑢𝑛) = (𝑐𝑢1, …, 𝑐𝑢𝑛)

Determine 𝑢 + 𝑣 and 𝑐𝑢 for the following vectors and scalars
in ℂ2.

(a) 𝑢 = (2 − 𝑖, 3 + 4𝑖), 𝑣 = (5, 1 + 3𝑖), 𝑐 = 3 − 2𝑖.

(b) 𝑢 = (1 + 5𝑖, −2 − 3𝑖), 𝑣 = (2𝑖, 3 − 2𝑖), 𝑐 = 4 + 𝑖.

3. Let 𝑊 be the set of vectors of the form (𝑎, 𝑎², 𝑏). Show
that 𝑊 is not a subspace of ℝ3.

4. Prove that the set 𝑈 of 2 × 2 diagonal matrices is a


subspace of the vector space 𝑀22 of 2 × 2 matrices.

5. Let 𝑃𝑛 denote the set of real polynomial functions of


degree ≤ 𝑛. Prove that 𝑃𝑛 is a vector space if addition and
scalar multiplication are defined on polynomials in a point
wise manner.
6. Let 𝑊 be the set of vectors of the form (𝑎, 𝑎, 𝑎 + 2). Show
that 𝑊 is not a subspace of ℝ3.

7. Consider the sets of vectors of the following form. Prove


that the sets are subspaces of ℝ3 .

(a) (𝑎, 𝑏, 0)

(b) (𝑎, 2𝑎, 𝑏)

(c) (𝑎, 𝑎 + 𝑏, 3𝑎)

8. Are the following sets subspaces of ℝ3? The set of all
vectors of the form (𝑎, 𝑏, 𝑐) where

(a) 𝑎 + 𝑏 + 𝑐 = 0

(b) 𝑎𝑏 = 0

(c) 𝑎𝑏 = 𝑎𝑐.

9. Prove that the following sets are not subspaces of ℝ3 .


The set of all vectors of the form

(a) (𝑎, 𝑎 + 1, 𝑏)

(b) (𝑎, 𝑏, 𝑎 + 𝑏 − 4).
10. Let 𝑈 be the set of all vectors of the form (𝑎, 𝑏, 𝑐) and 𝑉
be the set of all vectors of the form (𝑎, 𝑎 + 𝑏, 𝑐). Show that 𝑈
and 𝑉 are the same set. Is this set a subspace of ℝ3?

Answers
1. (a) (13, −1) (b) (17, −4) (c) (19, −20)
2. (a) 𝑢 + 𝑣 = (7 − 𝑖, 4 + 7𝑖), 𝑐𝑢 = (4 − 7𝑖, 17 + 6𝑖)
(b) 𝑢 + 𝑣 = (1 + 7𝑖, 1 − 5𝑖), 𝑐𝑢 = (−1 + 21𝑖, −5 − 14𝑖)

3. 𝑊 is not a subspace.

4. It is a vector space of matrices, embedded in 𝑀22


5. Subspace

6. Not a subspace

7. (a) The set is the 𝑥𝑦 plane

(b) The set is the plane given by the equation 𝑦 = 2𝑥.

(c) The set is the plane given by the equation 𝑧 = 3𝑥.


8. (a) Subspace (b) Not a subspace (c) Not a subspace

9. (a) If (𝑎, 𝑎 + 1, 𝑏) = (0,0,0), then 𝑎 = 𝑎 + 1 = 𝑏 = 0; but
𝑎 = 𝑎 + 1 is impossible. That is, the set does not contain the
zero vector.

(b) If (𝑎, 𝑏, 𝑎 + 𝑏 − 4) = (0,0,0), then 𝑎 = 𝑏 = 0 and −4 = 0,
which is not possible. That is, the set does not contain the
zero vector.
10. Yes.
1.3
Linear Combination of Vectors

Observe that any vector (𝑎, 𝑏, 𝑐) in the vector space ℝ3 can be
written as
(𝑎, 𝑏, 𝑐) = 𝑎(1,0,0) + 𝑏(0,1,0) + 𝑐(0,0,1)
The vectors (1,0,0), (0,1,0) and (0,0,1) in some sense
characterize the vector space ℝ3. We pursue this approach
to understanding vector spaces in terms of certain vectors
that represent the whole space.
Definition: Let 𝑣1, 𝑣2, …, 𝑣𝑚 be vectors in a vector space 𝑉.
We say that 𝑣, a vector in 𝑉, is a linear combination of
𝑣1, 𝑣2, …, 𝑣𝑚 if there exist scalars 𝑐1, 𝑐2, …, 𝑐𝑚 such that 𝑣
can be written as
𝑣 = 𝑐1𝑣1 + 𝑐2𝑣2 + ⋯ + 𝑐𝑚𝑣𝑚
Example: The vector (5,4,2) is a linear combination of the
vectors (1,2,0), (3,1,4) and (1,0,3), since it can be written as
(5,4,2) = (1,2,0) + 2(3,1,4) − 2(1,0,3).
DEFINITION: The vectors 𝑣1 , 𝑣2 , … … … . , 𝑣𝑚 are said to span
a vector space if every vector in the space can be expressed
as a linear combination of these vectors.
A spanning set of vectors in a sense defines the vector
space, since every vector in the space can be obtained from
this set.
We have developed the mathematics for looking at a
vector space in terms of a set of vectors that spans the
space. It is also useful to be able to do the converse,
namely to use a set of vectors to generate a vector space.
THEOREM: Let 𝑣1 , 𝑣2 , … , 𝑣𝑚 be vectors in a vector space 𝑉.
Let 𝑈 be the set consisting of all linear combinations of
𝑣1 , 𝑣2 , … , 𝑣𝑚 . Then 𝑈 is a subspace of 𝑉 spanned by the
vectors 𝑣1 , 𝑣2 , … , 𝑣𝑚 . 𝑈 is said to be the vector space
generated by 𝑣1 , 𝑣2 , … , 𝑣𝑚 .
Proof: Let 𝑢1 = 𝑎1𝑣1 + ⋯ + 𝑎𝑚𝑣𝑚 and 𝑢2 = 𝑏1𝑣1 + ⋯ + 𝑏𝑚𝑣𝑚
be arbitrary elements of 𝑈. Then
𝑢1 + 𝑢2 = (𝑎1𝑣1 + ⋯ + 𝑎𝑚𝑣𝑚) + (𝑏1𝑣1 + ⋯ + 𝑏𝑚𝑣𝑚)
= (𝑎1 + 𝑏1)𝑣1 + ⋯ + (𝑎𝑚 + 𝑏𝑚)𝑣𝑚.
𝑢1 + 𝑢2 is a linear combination of 𝑣1, 𝑣2, …, 𝑣𝑚. Thus 𝑢1 + 𝑢2
is in 𝑈. 𝑈 is closed under vector addition.
Let 𝑐 be an arbitrary scalar. Then
𝑐𝑢1 = 𝑐(𝑎1𝑣1 + ⋯ + 𝑎𝑚𝑣𝑚) = 𝑐𝑎1𝑣1 + ⋯ + 𝑐𝑎𝑚𝑣𝑚
𝑐𝑢1 is a linear combination of 𝑣1, 𝑣2, …, 𝑣𝑚. Therefore 𝑐𝑢1
is in 𝑈. 𝑈 is closed under scalar multiplication. Thus 𝑈 is a
subspace of 𝑉.
By the definition of 𝑈, every vector in 𝑈 can be written
as a linear combination of 𝑣1, 𝑣2, …, 𝑣𝑚. Thus 𝑣1, 𝑣2, …, 𝑣𝑚
span 𝑈.
Problem 1: Determine whether or not the vector (−1,1,5) is
a linear combination of the vectors (1,2,3), (0,1,4) and (2,3,6).
Solution: We examine the identity
𝐶1 (1,2,3) + 𝐶2 (0,1,4) + 𝐶3 (2,3,6) = (−1,1,5)

Can we find scalars 𝐶1 , 𝐶2 and 𝐶3 such that this identity


holds?

Using the operations of addition and scalar multiplication


we get

(𝐶1 + 2𝐶3, 2𝐶1 + 𝐶2 + 3𝐶3, 3𝐶1 + 4𝐶2 + 6𝐶3) = (−1, 1, 5)


Equating components leads to the following system of
linear equations.

𝐶1 + 2𝐶3 = −1

2𝐶1 + 𝐶2 + 3𝐶3 = 1

3𝐶1 + 4 𝐶2 + 6𝐶3 = 5
It can be shown that this system of equations has the
unique solution.

𝐶1 = 1, 𝐶2 = 2, 𝐶3 = −1.

Thus the vector (−1,1,5) can be written as the following
linear combination of the vectors (1,2,3), (0,1,4) and (2,3,6):
(−1,1,5) = (1,2,3) + 2(0,1,4) − 1(2,3,6).
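The coefficient system in Problem 1 can also be solved numerically, putting the given vectors in the columns of a matrix (a sketch using NumPy):

```python
import numpy as np

# Columns of A are the vectors (1,2,3), (0,1,4), (2,3,6);
# solving A c = (-1,1,5) gives the coefficients C1, C2, C3.
A = np.column_stack([(1, 2, 3), (0, 1, 4), (2, 3, 6)])
b = np.array([-1, 1, 5])

c = np.linalg.solve(A, b)
print(np.allclose(c, [1, 2, -1]))   # True: C1 = 1, C2 = 2, C3 = -1
print(np.allclose(A @ c, b))        # True: the combination gives (-1,1,5)
```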
Problem 2: Express the vector (4,5,5) as a linear
combination of the vectors (1,2,3), (−1,1,4) and (3,3,2)

Solution: Examine the following identity for values of 𝐶1,
𝐶2 and 𝐶3.
𝐶1 (1,2,3) + 𝐶2 (−1,1,4) + 𝐶3 (3,3,2) = (4,5,5)

We get (𝐶1 − 𝐶2 + 3𝐶3, 2𝐶1 + 𝐶2 + 3𝐶3, 3𝐶1 + 4𝐶2 + 2𝐶3)
= (4,5,5)
Equating components leads to the following system of
linear equations.
𝐶1 − 𝐶2 + 3𝐶3 = 4

2𝐶1 + 𝐶2 + 3𝐶3 = 5

3𝐶1 + 4 𝐶2 + 2𝐶3 = 5
This system of equations has many solutions,

𝐶1 = −2𝑟 + 3, 𝐶2 = 𝑟 − 1, 𝐶3 = 𝑟

Thus the vector can be expressed in many ways as a linear


combination of the vectors (1,2,3), (−1,1,4) and (3,3,2)
( 4,5,5) = (−2𝑟 + 3) (1,2,3) + (𝑟 − 1) (−1,1,4) + 𝑟(3,3,2)
For example,

𝑟 = 3 gives ( 4,5,5) = −3 (1,2,3) + 2(−1,1,4) + 3(3,3,2)

𝑟 = −1 gives ( 4,5,5) = 5 (1,2,3) − 2(−1,1,4) − (3,3,2).


Problem 3: Show that the vector (3, −4, −6) cannot be
expressed as a linear combination of the vectors
(1,2,3) (−1, −1, −2) and (1,4,5)
Solution: Consider the identity

𝐶1 (1,2,3) + 𝐶2 (−1, −1, −2) + 𝐶3 (1,4,5) = (3, −4, −6)

This identity leads to the following system of linear


equations.

𝐶1 − 𝐶2 + 𝐶3 = 3
2𝐶1 − 𝐶2 + 4𝐶3 = −4

3𝐶1 − 2𝐶2 + 5𝐶3 = −6

This system has no solution. Thus (3, −4, −6) is not a linear
combination of the vectors

(1,2,3), (−1,−1,−2) and (1,4,5).


Problem 4: Show that the vectors (1,2,0), (0,1, −1) and
(1, 1,2) span ℝ3 .

Solution: Let (𝑥, 𝑦, 𝑧) be an arbitrary element of ℝ3 .

We have to determine whether we can write (𝑥, 𝑦, 𝑧) =


𝐶1 (1,2,0) + 𝐶2 (0,1, −1) + 𝐶3 (1, 1,2).
Multiply and add the vectors to get

(𝑥, 𝑦, 𝑧) = ( 𝐶1 + 𝐶3 , 2𝐶1 + 𝐶2 + 𝐶3 , −𝐶2 + 2𝐶3 )

Thus, 𝐶1 + 𝐶3 = 𝑥
2𝐶1 + 𝐶2 + 𝐶3 = 𝑦
−𝐶2 + 2𝐶3 = 𝑧

This system of equations in the variables 𝐶1, 𝐶2 and 𝐶3 is
solved by the method of Gauss-Jordan elimination. It is
found to have the solution

𝐶1 = 3𝑥 − 𝑦 − 𝑧,

𝐶2 = −4𝑥 + 2𝑦 + 𝑧,

𝐶3 = −2𝑥 + 𝑦 + 𝑧.

We can write an arbitrary vector of ℝ3 as a linear


combination of these vectors as follows.

(𝑥, 𝑦, 𝑧) = (3𝑥 − 𝑦 − 𝑧) (1,2,0) + (−4𝑥 + 2𝑦 + 𝑧) (0,1, −1)


+ (−2𝑥 + 𝑦 + 𝑧) (1, 1,2).

The vectors (1,2,0), (0,1, −1) and (1, 1,2) span ℝ3 .
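The same conclusion follows from a rank computation: the three vectors span ℝ3 exactly when the matrix with these vectors as columns is invertible (a NumPy sketch; the sample point (𝑥, 𝑦, 𝑧) is illustrative):

```python
import numpy as np

# Columns are (1,2,0), (0,1,-1), (1,1,2); full rank means they span R^3.
A = np.column_stack([(1, 2, 0), (0, 1, -1), (1, 1, 2)])
print(np.linalg.matrix_rank(A))   # 3

# The solved coefficients agree with the formulas found above:
x, y, z = 2.0, -1.0, 4.0
c = np.linalg.solve(A, [x, y, z])
print(np.allclose(c, [3*x - y - z, -4*x + 2*y + z, -2*x + y + z]))  # True
```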


Problem 5: Let 𝑣1 and 𝑣2 span a subspace 𝑈 of a vector
space 𝑉. Let 𝑘1 and 𝑘2 be non-zero scalars. Show that 𝑘1 𝑣1
and 𝑘2 𝑣2 also span 𝑈.
Solution: Let 𝑣 be a vector in 𝑈.

Since 𝑣1 and 𝑣2 span 𝑈, there exist scalars 𝑎 and 𝑏 such
that

𝑣 = 𝑎𝑣1 + 𝑏𝑣2
Since 𝑘1 and 𝑘2 are non-zero, we can write
𝑣 = (𝑎/𝑘1)(𝑘1𝑣1) + (𝑏/𝑘2)(𝑘2𝑣2)

Thus the vectors 𝑘1𝑣1 and 𝑘2𝑣2 span 𝑈.


Problem 6: Let 𝑈 be the subspace of ℝ3 generated by the
vectors (1,2,0) and (−3,1,2). Let 𝑉 be the subspace of ℝ3
generated by the vectors (−1,5,2) and (4,1,−2). Show that
𝑈 = 𝑉.

Solution: Let ‘𝑢’ be a vector in 𝑈. Let us show that 𝑢 is in


𝑉.

Since 𝑢 is in 𝑈, there exists scalars 𝑎 and 𝑏 such that


𝑢 = 𝑎 (1, 2, 0) + 𝑏 (−3,1,2)

= (𝑎 − 3𝑏, 2𝑎 + 𝑏, 2𝑏)

Let us see if we can write u as a linear combination of


(−1,5,2) and (4,1, −2)
𝑢 = 𝑝(−1,5,2) + 𝑞 (4,1, −2)

= (−𝑝 + 4𝑞, 5𝑝 + 𝑞, 2𝑝 − 2𝑞)

Such 𝑝 and 𝑞 would have to satisfy

−𝑝 + 4𝑞 = 𝑎 − 3𝑏

5𝑝 + 𝑞 = 2𝑎 + 𝑏

2𝑝 − 2𝑞 = 2𝑏.
This system of equations has the unique solution
𝑝 = (𝑎 + 𝑏)/3, 𝑞 = (𝑎 − 2𝑏)/3.

Thus 𝑢 can be written as

𝑢 = ((𝑎 + 𝑏)/3)(−1,5,2) + ((𝑎 − 2𝑏)/3)(4,1,−2).
Therefore 𝑢 is a vector in 𝑉. Conversely, let 𝑣 be a vector in
𝑉. Similarly we can show that 𝑣 is in 𝑈.
Therefore 𝑈 = 𝑉.
Exercise
1. Let 𝑈 be the vector space generated by the functions
𝑓(𝑥) = 𝑥 + 1 and 𝑔(𝑥) = 2𝑥² − 2𝑥 + 3. Show that the function
ℎ(𝑥) = 6𝑥² − 10𝑥 + 5 lies in 𝑈.

2. In the following sets of vectors, determine whether the


first vector is a linear combination of the other vectors.

(a) (−3,3,7); (1,−1,2), (2,1,0), (−1,2,1)

(b) (0,10,8); (−1,2,3), (1,3,1), (1,8,5)


3. Determine whether the following vectors span ℝ3 .

(a) (2,1,0), (−1,3,1), (4,5,0)

(b) (1,2,1), (−1,3,0), (0,5,1)

4. Give three other vectors in the subspace of ℝ3 generated
by the vectors (1,2,3), (1,2,0).

5. Let 𝑈 be the subspace of ℝ3 generated by the vectors


(3, −1,2) and (1,0,4). Let 𝑉 be the subspace of ℝ3 generated
by the vectors (4, −1,6) and (1, −1, −6). Show that 𝑈 = 𝑉.
6. In each of the following, determine whether the first
function is a linear combination of the functions that
follow:

(a) 𝑓(𝑥) = 3𝑥² + 2𝑥 + 9; 𝑔(𝑥) = 𝑥² + 1, ℎ(𝑥) = 𝑥 + 3

(b) 𝑓(𝑥) = 𝑥² + 4𝑥 + 5; 𝑔(𝑥) = 𝑥² + 𝑥 − 1, ℎ(𝑥) = 𝑥² + 2𝑥 + 1

7. Let 𝑣, 𝑣1 and 𝑣2 be vectors in a vector space 𝑉. Let 𝑣 be
a linear combination of 𝑣1 and 𝑣2. If 𝑐1 and 𝑐2 are nonzero
scalars, show that 𝑣 is also a linear combination of 𝑐1𝑣1 and
𝑐2𝑣2.

Answers
2. (a) (−3,3,7) = 2(1,−1,2) − (2,1,0) + 3(−1,2,1)
(b) (0,10,8) = (2 − 𝑐)(−1,2,3) + (2 − 2𝑐)(1,3,1) + 𝑐(1,8,5),
where 𝑐 is any real number

3. (a) Span (b) Do not span

4. e.g., (1,2,3) + (1,2,0) = (2,4,3), (1,2,3) − (1,2,0) = (0,0,3),
2(1,2,3) = (2,4,6).
1.4
Linear Dependence and Independence
In this module, we continue the development of vector
space structure. We introduce concepts of dependence and
independence of vectors. These will be useful tools in
constructing “efficient” spanning sets for vector spaces–sets
in which there are no redundant vectors.
Let us motivate the idea of dependence of vectors. Observe
that the vector (4, −1,0) is a linear combination of the
vectors (2,1,3) and (0,1,2) since it can be written as
(4, −1,0) = 2(2,1,3) − 3(0,1,2).
The above equation can be rewritten in a number of ways.
Each vector can be expressed in terms of the other vectors.
(2,1,3) = (1/2) (4, −1,0) + (3/2) (0,1,2)

(0,1,2) = (2/3) (2,1,3) − (1/3) (4, −1,0).

Each of the three vectors is, in fact, dependent on the other
two vectors. We express this by writing

(4,−1,0) − 2(2,1,3) + 3(0,1,2) = (0,0,0).


This concept of dependence of vectors is made precise with
the following definition.

DEFINITION: (a) The set of vectors {𝑣1, …, 𝑣𝑚} in a vector
space 𝑉 is said to be linearly dependent if there exist
scalars 𝑐1, …, 𝑐𝑚, not all zero, such that 𝑐1𝑣1 + ⋯ + 𝑐𝑚𝑣𝑚 = 0.
(b) The set of vectors {𝑣1, …, 𝑣𝑚} is linearly independent if
𝑐1𝑣1 + ⋯ + 𝑐𝑚𝑣𝑚 = 0 can only be satisfied when
𝑐1 = 0, …, 𝑐𝑚 = 0.
We now present an important result that relates the
concepts of linear dependence and linear combination.
THEOREM: A set consisting of two or more vectors in a
vector space is linearly dependent if and only if it is
possible to express one of the vectors as a linear
combination of the other vectors.

Proof: Let the set {𝑣1, 𝑣2, …, 𝑣𝑚} be linearly dependent.

Therefore, there exist scalars 𝑐1, 𝑐2, …, 𝑐𝑚, not all zero, such
that
𝑐1𝑣1 + 𝑐2𝑣2 + ⋯ + 𝑐𝑚𝑣𝑚 = 0

Assume that 𝑐1 ≠ 0.
The above identity can be rewritten as
𝑣1 = (−𝑐2/𝑐1)𝑣2 + ⋯ + (−𝑐𝑚/𝑐1)𝑣𝑚

Thus, 𝑣1 is a linear combination of 𝑣2, …, 𝑣𝑚. Conversely,
assume that 𝑣1 is a linear combination of 𝑣2, …, 𝑣𝑚.

Therefore there exist scalars 𝑑2, …, 𝑑𝑚 such that

𝑣1 = 𝑑2𝑣2 + ⋯ + 𝑑𝑚𝑣𝑚

⇒ 1𝑣1 + (−𝑑2)𝑣2 + ⋯ + (−𝑑𝑚)𝑣𝑚 = 0.

Thus the set {𝑣1, 𝑣2, …, 𝑣𝑚} is linearly dependent,
completing the proof.
THEOREM: Let 𝑉 be a vector space. Any set of vectors in 𝑉
that contains the zero vector is linearly dependent.

Proof: Consider the set {0, 𝑣2, …, 𝑣𝑚}, which contains the zero
vector. Let us examine the identity

𝑐1·0 + 𝑐2𝑣2 + ⋯ + 𝑐𝑚𝑣𝑚 = 0.

We see that the identity is true for 𝑐1 = 1, 𝑐2 = 0, …, 𝑐𝑚 = 0
(not all zero). Thus the set of vectors is linearly dependent,
proving the theorem.

THEOREM: Let the set {𝑣1 , … , 𝑣𝑚 } be linearly dependent in


a vector space 𝑉. Any set of vectors in 𝑉 that contains these
vectors will also be linearly dependent.

Proof: Since the set {𝑣1, …, 𝑣𝑚} is linearly dependent, there
exist scalars 𝐶1, …, 𝐶𝑚, not all zero, such that
𝑐1𝑣1 + ⋯ + 𝑐𝑚𝑣𝑚 = 0.

Consider a set of vectors {𝑣1, …, 𝑣𝑚, 𝑣𝑚+1, …, 𝑣𝑛} which
contains the given vectors.

There are scalars, not all zero, namely 𝐶1, 𝐶2, …, 𝐶𝑚, 0, …, 0,
such that
𝐶1𝑣1 + ⋯ + 𝐶𝑚𝑣𝑚 + 0𝑣𝑚+1 + ⋯ + 0𝑣𝑛 = 0.

Thus the set {𝑣1, …, 𝑣𝑚, 𝑣𝑚+1, …, 𝑣𝑛} is linearly dependent.


Problem 1: Show that the set {(1,2,3), (−2,1,1), (8,6,10)} is
linearly dependent in ℝ3 .

Solution: Let us examine the identity 𝐶1(1,2,3) +
𝐶2(−2,1,1) + 𝐶3(8,6,10) = 0. We want to show that at least
one of the 𝐶's can be nonzero. We get
(𝐶1 − 2𝐶2 + 8𝐶3, 2𝐶1 + 𝐶2 + 6𝐶3, 3𝐶1 + 𝐶2 + 10𝐶3) = (0,0,0).
Equating each component of this vector to zero gives the
system of equations.

𝐶1 − 2𝐶2 + 8𝐶3 = 0

2𝐶1 + 𝐶2 + 6𝐶3 = 0
3𝐶1 + 𝐶2 + 10𝐶3 = 0

This system has the solution 𝐶1 = 4, 𝐶2 = −2, 𝐶3 = −1. Since
at least one of the 𝐶's is nonzero, the set of vectors is
linearly dependent.
The linear dependence is expressed by the equation

4(1,2,3) − 2(−2,1,1) − (8,6,10) = 0.
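A quick numerical cross-check of Problem 1: the matrix with the three vectors as columns is singular, and (4, −2, −1) is in its null space (a NumPy sketch):

```python
import numpy as np

A = np.column_stack([(1, 2, 3), (-2, 1, 1), (8, 6, 10)])
print(abs(np.linalg.det(A)) < 1e-9)              # True: columns are dependent
print(np.allclose(A @ np.array([4, -2, -1]), 0)) # True: 4v1 - 2v2 - v3 = 0
```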


Problem 2: Show that the set {(3,−2,2), (3,−1,4), (1,0,5)} is
linearly independent in ℝ3.

Solution: We examine the identity 𝐶1(3,−2,2) + 𝐶2(3,−1,4) +
𝐶3(1,0,5) = 0.

We want to show that this identity can only hold if


𝐶1 , 𝐶2 and 𝐶3 are all zero. We get

(3𝐶1 + 3𝐶2 + 𝐶3 , −2𝐶1 − 𝐶2 , 2𝐶1 + 4𝐶2 + 5𝐶3 ) = 0.

Equating the components to zero gives.

3 𝐶1 + 3 𝐶2 + 𝐶3 = 0

−2 𝐶1 − 𝐶2 = 0
2 𝐶1 + 4 𝐶2 + 5 𝐶3 = 0

This system has only the trivial solution 𝐶1 = 0, 𝐶2 = 0,
𝐶3 = 0. Hence the set of vectors is linearly independent.
Problem 3: Let the set {𝑣1 , 𝑣2 } be linearly independent.
Prove that {𝑣1 + 𝑣2 , 𝑣1 − 𝑣2 } is also linearly independent.
Solution: Let us examine the identity

𝑎(𝑣1 + 𝑣2) + 𝑏(𝑣1 − 𝑣2) = 0 ______(1)

If we can show that this identity implies 𝑎 = 0 and 𝑏 = 0,
then {𝑣1 + 𝑣2, 𝑣1 − 𝑣2} will be linearly independent.
We get
𝑎𝑣1 + 𝑎𝑣2 + 𝑏𝑣1 − 𝑏𝑣2 = 0

⇒ (𝑎 + 𝑏)𝑣1 + (𝑎 − 𝑏)𝑣2 = 0

Since {𝑣1, 𝑣2} is linearly independent, 𝑎 + 𝑏 = 0 and 𝑎 − 𝑏 = 0.

This system has the unique solution 𝑎 = 0, 𝑏 = 0. Thus
{𝑣1 + 𝑣2, 𝑣1 − 𝑣2} is linearly independent.
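Problem 3 can be illustrated with a concrete independent pair; the choice 𝑣1 = (1,0), 𝑣2 = (0,1) below is ours for illustration, not from the text:

```python
import numpy as np

v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([v1 + v2, v1 - v2])   # columns (1,1) and (1,-1)
print(abs(np.linalg.det(A)) > 1e-9)       # True: v1+v2, v1-v2 independent
```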


EXERCISE
1. Find values of ‘𝑡’ for which the following sets are
linearly dependent.

(a) {(−1, 2), (𝑡, −4)}

(b) {(2, −𝑡), (2𝑡 + 6,4𝑡)}.

2. Let the set {𝑣1 , 𝑣2 , 𝑣3 } be linearly dependent in a vector


space 𝑉 . Let ‘𝑐’ be a non-zero scalar. Prove that the
following sets are also linearly dependent.

(a) {𝑣1 , 𝑣1 + 𝑣2 , 𝑣3 }

(b) {𝑣1 , 𝑐𝑣2 , 𝑣3 }

(c) {𝑣1 , 𝑣1 + 𝑐𝑣2 , 𝑣3 } .


3. Same question as above replacing linearly dependent
with linearly independent.
4. Let a set ‘𝑆’ be linearly independent in a vector space
𝑉 . Prove that every subset of 𝑆 is also linearly
independent. Let 𝑃 be linearly dependent. Is every
subset of 𝑃 linearly dependent?
5. Let {𝑣1 , 𝑣2 } be linearly independent in a vector space 𝑉.
Show that if a vector 𝑣3 is not of the form 𝑎𝑣1 + 𝑏𝑣2 ,
then the set {𝑣1 , 𝑣2 , 𝑣3 } is linearly independent.
6. Prove that a set of two or more vectors in a vector
space is linearly independent if no vector in the set can
be expressed as a linear combination of the other
vectors.
Answers
1. (a) 𝑡 = 2
(b) 𝑡 = 0 or 𝑡 = −7
1.5
BASES AND DIMENSION

DEFINITION: A finite set of vectors {𝑣1, …, 𝑣𝑚} is called a
basis for a vector space 𝑉 if the set spans 𝑉 and is linearly
independent.
Intuitively, a basis is an efficient set for characterizing a
vector space, in that any vector can be expressed as a
linear combination of the basis vectors, and the basis
vectors are independent of one another.
Example: The set of 𝑛 vectors {(1,0,…,0), (0,1,0,…,0), …,
(0,…,0,1)} is a basis for ℝ𝑛. This basis is called the standard
basis for ℝ𝑛.
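The standard basis can be written down as the rows of the identity matrix; a short NumPy sketch for 𝑛 = 4:

```python
import numpy as np

n = 4
E = np.eye(n)                          # rows: (1,0,0,0), ..., (0,0,0,1)
print(np.linalg.matrix_rank(E) == n)   # True: n independent, spanning vectors

# A vector's coordinates in the standard basis are its own components:
v = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(sum(v[i] * E[i] for i in range(n)), v))   # True
```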
THEOREM: Let {𝑣1, 𝑣2, …, 𝑣𝑛} be a basis for a
vector space 𝑉. If {𝜔1, 𝜔2, …, 𝜔𝑚} is a set of more than 𝑛
vectors in 𝑉, then this set is linearly dependent.
Proof: We examine the identity 𝑐1𝜔1 + ⋯ + 𝑐𝑚𝜔𝑚 = 0 …..(1)
We will show that values of 𝑐1, …, 𝑐𝑚, not all zero, exist
satisfying this identity, proving that the vectors are
linearly dependent.
Since the set 𝑣1 , 𝑣2 , … , 𝑣𝑛 is a basis for 𝑉, each of the
vectors 𝜔1 , 𝜔2 , … , 𝜔𝑚 can be expressed as a linear
combination of 𝑣1 , 𝑣2 , … . , 𝑣𝑛
Let 𝜔1 = 𝑎11 𝑣1 + 𝑎12 𝑣2 + ⋯ . +𝑎1𝑛 𝑣𝑛
𝜔2 = 𝑎21 𝑣1 + 𝑎22 𝑣2 + ⋯ . +𝑎2𝑛 𝑣𝑛
.
.
𝜔𝑚 = 𝑎𝑚1 𝑣1 + 𝑎𝑚2 𝑣2 + ⋯ . +𝑎𝑚𝑛 𝑣𝑛
Substituting these values in (1), we get
𝑐1(𝑎11𝑣1 + 𝑎12𝑣2 + ⋯ + 𝑎1𝑛𝑣𝑛) + ⋯ + 𝑐𝑚(𝑎𝑚1𝑣1 + 𝑎𝑚2𝑣2 +
⋯ + 𝑎𝑚𝑛𝑣𝑛) = 0
Rearranging, we get
(𝑐1𝑎11 + 𝑐2𝑎21 + ⋯ + 𝑐𝑚𝑎𝑚1)𝑣1 + ⋯ + (𝑐1𝑎1𝑛 + 𝑐2𝑎2𝑛 + ⋯ +
𝑐𝑚𝑎𝑚𝑛)𝑣𝑛 = 0

Since 𝑣1 , 𝑣2 , … , 𝑣𝑛 are linearly independent, we get


𝑎11 𝑐1 + 𝑎21 𝑐2 + ⋯ + 𝑎𝑚1 𝑐𝑚 = 0
.
.
𝑎1𝑛 𝑐1 + 𝑎2𝑛 𝑐2 + ⋯ + 𝑎𝑚𝑛 𝑐𝑚 = 0
Thus finding c’s that satisfy equation (1) reduces to finding
solutions to this system of ′𝑛′ equations in ′𝑚′ variables.
Since 𝑚 > 𝑛, the number of variables is greater than the
number of equations. We know that such a system of
homogeneous equations has many solutions.
Therefore, there are non-zero values of c’s that satisfy
equation(1). Thus the set 𝜔1 , 𝜔2 , … , 𝜔𝑚 is linearly
dependent.
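The argument above is constructive in spirit: with more vectors than the dimension, the homogeneous system has a nonzero solution, which supplies the dependency. A quick numerical sketch (assuming NumPy is available; the three vectors in ℝ² are arbitrary illustrative choices):

```python
import numpy as np

# Three vectors in R^2 (m = 3 > n = 2), so a nonzero dependency must exist.
w = np.array([[1.0, 2.0], [3.0, 1.0], [4.0, 3.0]])

# Put the vectors as columns of A; a null-space vector of A gives
# coefficients c with c1*w1 + c2*w2 + c3*w3 = 0.
A = w.T  # shape (2, 3)
_, s, Vt = np.linalg.svd(A)
c = Vt[-1]  # right-singular vector beyond the rank spans the null space

combo = c @ w  # c1*w1 + c2*w2 + c3*w3
assert np.linalg.norm(c) > 0.5   # the coefficients are not all zero
assert np.allclose(combo, 0.0)   # yet the combination is the zero vector
```

The same computation works for any m > n vectors in ℝⁿ, mirroring the underdetermined homogeneous system in the proof.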
THEOREM: Any two bases for a vector space 𝑉 consist of
the same number of vectors.
Proof: Let 𝑣1 , 𝑣2 , … , 𝑣𝑛 and 𝜔1 , 𝜔2 , … , 𝜔𝑚 be two bases for
𝑉 . If we interpret 𝑣1 , 𝑣2 , … , 𝑣𝑛 as a basis for 𝑉 and
𝜔1 , 𝜔2 , … , 𝜔𝑚 as a set of linearly independent vectors in 𝑉,
then the previous theorem tells us that 𝑚 ≤ 𝑛. Conversely, if we interpret 𝜔1, 𝜔2, …, 𝜔𝑚 as a basis for 𝑉 and 𝑣1, 𝑣2, …, 𝑣𝑛 as a set of linearly independent vectors in 𝑉, then 𝑛 ≤ 𝑚. Thus 𝑛 = 𝑚, proving that both bases consist of the same number of vectors.
DEFINITION: If a vector space 𝑉 has a basis consisting of 𝑛 vectors, then the dimension of 𝑉 is said to be 𝑛. We write dim 𝑉 for the dimension of 𝑉.
EXAMPLE: The set of 𝑛 vectors 1,0, … ,0 , … , 0, … ,0,1 forms a basis (the standard basis) for ℝ𝑛. Thus the dimension of ℝ𝑛 is 𝑛.
Note that we have defined a basis for a vector space to be
a finite set of vectors that spans the space and is linearly
independent. Such a set does not exist for all vector
spaces. When such a finite set exists, we say that the
vector space is finite dimensional. If such a finite set does
not exist, we say that the vector space is infinite
dimensional.
THEOREM: Let 𝑣1 , 𝑣2 , … , 𝑣𝑛 be a basis for a vector space 𝑉.
Then each vector in 𝑉 can be expressed uniquely as a
linear combination of these vectors.
Proof: Let ′𝑣′ be a vector in 𝑉. Since 𝑣1 , 𝑣2 , … , 𝑣𝑛 is a basis,
we can express 𝑣 as a linear combination of these vectors.
Suppose we can write
𝑣 = 𝑎1 𝑣1 + 𝑎2 𝑣2 + ⋯ + 𝑎𝑛 𝑣𝑛 and
𝑣 = 𝑏1 𝑣1 + 𝑏2 𝑣2 + ⋯ + 𝑏𝑛 𝑣𝑛 then
𝑎1 𝑣1 + 𝑎2 𝑣2 + ⋯ + 𝑎𝑛 𝑣𝑛 = 𝑏1 𝑣1 + 𝑏2 𝑣2 + ⋯ + 𝑏𝑛 𝑣𝑛
⟹ 𝑎1 − 𝑏1 𝑣1 + 𝑎2 − 𝑏2 𝑣2 + ⋯ + 𝑎𝑛 − 𝑏𝑛 𝑣𝑛 = 0
Since 𝑣1 , 𝑣2 , … , 𝑣𝑛 is a basis, the vectors 𝑣1 , 𝑣2 , … , 𝑣𝑛 are
linearly independent. Thus 𝑎1 − 𝑏1 = 0, … , 𝑎𝑛 − 𝑏𝑛 = 0
implying that 𝑎1 = 𝑏1 , … , 𝑎𝑛 = 𝑏𝑛
Therefore there is only one way of expressing 𝑣 as a linear
combination of the basis.
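The uniqueness proved above can be seen concretely: solving for the coordinates of a vector relative to a basis of ℝ³ always yields the same answer, however it is computed. A sketch assuming NumPy, with an arbitrarily chosen basis and vector:

```python
import numpy as np

# A basis of R^3 as the columns of B (any invertible matrix works).
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [-1.0, 1.0, 4.0]])
v = np.array([2.0, 3.0, 5.0])

# Coordinates a with B @ a = v; uniqueness follows from B being invertible.
a = np.linalg.solve(B, v)
assert np.allclose(B @ a, v)

# Any other coordinate vector b with B @ b = v must equal a.
b = np.linalg.lstsq(B, v, rcond=None)[0]
assert np.allclose(a, b)
```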
Lemma. Let S be a linearly independent subset of a vector
space V. Suppose β is a vector in V which is not in the
subspace spanned by S. Then the set obtained by adjoining
β to S is linearly independent.
Proof: Suppose 𝛼1 , … , 𝛼𝑚 are distinct vectors in S and
that 𝑐1 𝛼1 + ⋯ + 𝑐𝑚 𝛼𝑚 + 𝑏𝛽 = 0.
Then 𝑏 = 0; for otherwise,
𝛽 = (−𝑐1/𝑏) 𝛼1 + ⋯ + (−𝑐𝑚/𝑏) 𝛼𝑚
and β is in the subspace spanned by S. Thus 𝑐1 𝛼1 + ⋯ +
𝑐𝑚 𝛼𝑚 = 0, and since S is a linearly independent set each
𝑐𝑖 = 0.
Theorem: If W is a subspace of finite-dimensional vector
space V, every linearly independent subset of W is finite
and is part of a basis for W.
Proof: Suppose S0 is a linearly independent subset of W. If
S is a linearly independent subset of W containing S0, then
S is also a linearly independent subset of V; since V is
finite-dimensional, S contains no more than dim V
elements.
We extend S0 to a basis for W, as follows. If S0 spans W, then S0 is a basis for W and we are done. If S0 does not
span W, we use the preceding lemma to find a vector β1 in
W such that the set 𝑆1 = 𝑆0 ∪ β1 is independent. If S1
spans W, fine. If not, apply the lemma to obtain a vector β 2
in W such that 𝑆2 = 𝑆1 ∪ β2 is independent. If we continue
in this way, then (in not more than dim V steps) we reach a
set
𝑆𝑚 = 𝑆0 ∪ {β1, …, βm}
which is a basis for 𝑊.
Suppose that a vector space is known to have dimension 𝑛. The following theorem tells us that we do not have to check both the linear independence and the spanning condition to see whether a given set is a basis.
THEOREM: Let 𝑉 be a vector space of dimension 𝑛.
a) If 𝑆 = 𝑣1 , 𝑣2 , … , 𝑣𝑛 is a set of 𝑛 linearly independent
vectors in 𝑉, then 𝑆 is a basis for 𝑉.
b) If 𝑆 = 𝑣1 , 𝑣2 , … , 𝑣𝑛 is a set of 𝑛 vectors that spans 𝑉,
then 𝑆 is a basis for 𝑉.
Proof: (a) This part follows from the above theorem and the fact that every basis of 𝑉 consists of 𝑛 elements.
(b) It is enough to show that 𝑆 is linearly independent.
Let {𝑢1, 𝑢2, …, 𝑢𝑛} be a basis of 𝑉. By an argument similar to that in the first theorem of this section, using the fact that the resulting homogeneous system of 𝑛 equations in 𝑛 variables has only the trivial solution, we can prove that {𝑣1, 𝑣2, …, 𝑣𝑛} is linearly independent.
THEOREM: If W1 and W2 are finite-dimensional subspaces
of a vector space V, then 𝑊1 + 𝑊2 is finite-dimensional and
dim 𝑊1 + 𝑑𝑖𝑚 𝑊2 = 𝑑𝑖𝑚 𝑊1 ∩ 𝑊2 + 𝑑𝑖𝑚 𝑊1 + 𝑊2
Proof. By Theorem 5 and its corollaries, 𝑊1 ∩ 𝑊2 has a finite
basis 𝛼1 , … , 𝛼𝑘 which is part of a basis
𝛼1 , … , 𝛼𝑘 , 𝛽1 , … , 𝛽𝑚 for W1
and part of basis
𝛼1 , … , 𝛼𝑘 , 𝛾1 , … , 𝛾𝑛 for W2.
The subspace 𝑊1 + 𝑊2 is spanned by the vectors
𝛼1 , … , 𝛼𝑘 , 𝛽1 , … , 𝛽𝑚 , 𝛾1 , … , 𝛾𝑛 and these vectors form an
independent set. For suppose
∑ 𝑥𝑖 𝛼𝑖 + ∑ 𝑦𝑗 𝛽𝑗 + ∑ 𝑧𝑟 𝛾𝑟 = 0.
Then
−∑ 𝑧𝑟 𝛾𝑟 = ∑ 𝑥𝑖 𝛼𝑖 + ∑ 𝑦𝑗 𝛽𝑗
which shows that ∑ 𝑧𝑟 𝛾𝑟 belongs to 𝑊1. As ∑ 𝑧𝑟 𝛾𝑟 also belongs to 𝑊2, it follows that
∑ 𝑧𝑟 𝛾𝑟 = ∑ 𝑐𝑖 𝛼𝑖
for certain scalars 𝑐1, …, 𝑐𝑘. Because the set
𝛼1, …, 𝛼𝑘, 𝛾1, …, 𝛾𝑛
is independent, each of the scalars 𝑧𝑟 = 0. Thus,
∑ 𝑥𝑖 𝛼𝑖 + ∑ 𝑦𝑗 𝛽𝑗 = 0
and since
𝛼1, …, 𝛼𝑘, 𝛽1, …, 𝛽𝑚
is also an independent set, each 𝑥𝑖 = 0 and each 𝑦𝑗 = 0.
Thus 𝛼1, …, 𝛼𝑘, 𝛽1, …, 𝛽𝑚, 𝛾1, …, 𝛾𝑛
is a basis for 𝑊1 + 𝑊2. Finally,
dim 𝑊1 + dim 𝑊2 = (𝑘 + 𝑚) + (𝑘 + 𝑛)
= 𝑘 + (𝑚 + 𝑘 + 𝑛)
= dim(𝑊1 ∩ 𝑊2) + dim(𝑊1 + 𝑊2).
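The dimension formula just proved can be checked on a concrete pair of subspaces. A sketch assuming NumPy, taking 𝑊1 as the xy-plane and 𝑊2 as the yz-plane in ℝ³ (their intersection is the y-axis, known here by inspection):

```python
import numpy as np

# W1 = xy-plane, W2 = yz-plane in R^3 (basis vectors as rows).
W1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
W2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

dim_W1 = np.linalg.matrix_rank(W1)                    # 2
dim_W2 = np.linalg.matrix_rank(W2)                    # 2
dim_sum = np.linalg.matrix_rank(np.vstack([W1, W2]))  # dim(W1 + W2) = 3
dim_intersection = 1                                  # W1 ∩ W2 is the y-axis

# dim W1 + dim W2 = dim(W1 ∩ W2) + dim(W1 + W2): 2 + 2 == 1 + 3
assert dim_W1 + dim_W2 == dim_intersection + dim_sum
```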
THEOREM: Let 𝑉 be an 𝑛-dimensional vector space and let 𝑊 be a subspace of 𝑉. Then 𝑊 is a finite dimensional vector space with dim 𝑊 ≤ dim 𝑉.
Proof: Given dim 𝑉 = 𝑛, any set with more than 𝑛 vectors in 𝑉 is linearly dependent. Since 𝑊 is a subspace of 𝑉, any linearly independent subset of 𝑊 contains at most 𝑛 vectors.
Let 𝑆 = {𝑤1, …, 𝑤𝑚}, with 𝑚 ≤ 𝑛, be a largest linearly independent subset of 𝑊. To prove that dim 𝑊 = 𝑚, it is enough to show that 𝑆 is a basis of 𝑊.
For any vector 𝑤 ∈ 𝑊, consider the set 𝑆 ∪ {𝑤}. Since 𝑆 is a largest linearly independent set, 𝑆 ∪ {𝑤} is linearly dependent in 𝑊. Thus there exist scalars 𝑐, 𝑐1, …, 𝑐𝑚, not all zero, such that
𝑐𝑤 + 𝑐1𝑤1 + ⋯ + 𝑐𝑚𝑤𝑚 = 0
If 𝑐 = 0, then 𝑆 would be linearly dependent, a contradiction; hence 𝑐 ≠ 0. Therefore
𝑤 = −(𝑐1/𝑐)𝑤1 − ⋯ − (𝑐𝑚/𝑐)𝑤𝑚
so 𝑆 spans 𝑊. Thus 𝑆 is a basis of 𝑊 with dimension 𝑚, which shows that dim 𝑊 = 𝑚 ≤ 𝑛 = dim 𝑉.
Problem 1: Show that the set { 1,0,−1 , 1,1,1 , 1,2,4 } is a basis for ℝ3.
Solution: Let us first show that the set spans ℝ3.
Let (𝑥1, 𝑥2, 𝑥3) be an arbitrary element of ℝ3. We try to find scalars 𝑎1, 𝑎2, 𝑎3 such that
(𝑥1, 𝑥2, 𝑥3) = 𝑎1(1,0,−1) + 𝑎2(1,1,1) + 𝑎3(1,2,4).
This identity leads to the system of equations
𝑎1 + 𝑎2 + 𝑎3 = 𝑥1
𝑎2 + 2𝑎3 = 𝑥2
−𝑎1 + 𝑎2 + 4𝑎3 = 𝑥3
This system of equations has the solution
𝑎1 = 2𝑥1 − 3𝑥2 + 𝑥3
𝑎2 = −2𝑥1 + 5𝑥2 − 2𝑥3
𝑎3 = 𝑥1 − 2𝑥2 + 𝑥3
Thus the set spans the space. We now show that the set is linearly independent. Consider the identity
𝑏1(1,0,−1) + 𝑏2(1,1,1) + 𝑏3(1,2,4) = (0,0,0)
This identity leads to the system of equations
𝑏1 + 𝑏2 + 𝑏3 = 0
𝑏2 + 2𝑏3 = 0
−𝑏1 + 𝑏2 + 4𝑏3 = 0
This system has the unique solution 𝑏1 = 0, 𝑏2 = 0, and 𝑏3 = 0. Thus the set is linearly independent.
Therefore { 1,0,−1 , 1,1,1 , 1,2,4 } forms a basis for ℝ3.
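The worked solution above can be double-checked numerically. A sketch assuming NumPy: a nonzero determinant certifies both independence and spanning at once, and the closed-form coordinate formulas can be tested at an arbitrary point (the values 7, −2, 3 are illustrative):

```python
import numpy as np

# The three candidate basis vectors as columns.
M = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [-1.0, 1.0, 4.0]])

# Nonzero determinant <=> columns are linearly independent <=> they span R^3.
assert abs(np.linalg.det(M)) > 1e-12

# Check the closed-form solution from the text at an arbitrary (x1, x2, x3).
x1, x2, x3 = 7.0, -2.0, 3.0
a = np.linalg.solve(M, np.array([x1, x2, x3]))
expected = np.array([2*x1 - 3*x2 + x3,
                     -2*x1 + 5*x2 - 2*x3,
                     x1 - 2*x2 + x3])
assert np.allclose(a, expected)
```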
Problem 2: Prove that the set { 1,3,−1 , 2,1,0 , 4,2,1 } is a basis for ℝ3.
Solution: The dimension of ℝ3 is three, so a basis of ℝ3 consists of three vectors. We have the correct number of vectors for a basis.
Normally, we would have to show that this set is linearly independent and that it spans ℝ3. Since ℝ3 is a finite dimensional vector space, we need to check only one of these two conditions. Let us check for linear independence. We get 𝑐1(1,3,−1) + 𝑐2(2,1,0) + 𝑐3(4,2,1) = (0,0,0). This identity leads to the system of equations
𝑐1 + 2𝑐2 + 4𝑐3 = 0
3𝑐1 + 𝑐2 + 2𝑐3 = 0
−𝑐1 + 𝑐3 = 0
This system has the unique solution 𝑐1 = 0, 𝑐2 = 0, 𝑐3 = 0.
Thus the vectors are linearly independent. The set { 1,3,−1 , 2,1,0 , 4,2,1 } is therefore a basis for ℝ3.
Problem 3: State (with a brief explanation) whether the following statements are true or false.
(a) The vectors (1, 2), (−1, 3), (5, 2) are linearly dependent in ℝ2.
(b) The vectors (1, 0, 0), (0, 2, 0), (1, 2, 0) span ℝ3.
(c) {(1, 0, 2), (0, 1, −3)} is a basis for the subspace of ℝ3 consisting of vectors of the form (a, b, 2a−3b).
(d) Any set of two vectors can be used to generate a two-dimensional subspace of ℝ3.
Solution:
(a) True: The dimension of ℝ2 is two. Thus any three vectors are linearly dependent.
(b) False: The three vectors are linearly dependent. Thus they cannot span a three-dimensional space.
(c) True: The vectors span the subspace since
(a, b, 2a−3b) = a(1, 0, 2) + b(0, 1, −3)
The vectors are also linearly independent since they are not collinear.
(d) False: The two vectors must be linearly independent.
Exercise
1. Prove that the subspace of ℝ3 generated by the vectors
−1,2,1 , 2, −1,0 , and 1,4,3 is a two dimensional
subspace of ℝ3 and give a basis for this subspace.
2. Find a basis for ℝ3 that includes the vectors 1,1,1 and
1,0, −2 .
3. Determine a basis for each of the following subspaces
of ℝ3 . Give the dimension of each subspace.
a) The set of vectors of the form 𝑎, 𝑎, 𝑏 .
b) The set of vectors of the form 𝑎, 𝑏, 𝑎 + 𝑏
c) The set of vectors of the form 𝑎, 𝑏, 𝑐 , where 𝑎 + 𝑏 + 𝑐 =
0.
4. Which of the following sets of vectors are bases for ℝ2?
(a) {(3, 1), (2, 1)} (b) {(1, −3), (−2, 6)}
5. Which of the following sets are bases for ℝ3?
(a) {(1, −1, 2), (2, 0, 1), (3, 0, 0)}
(b) {(2, 1, 0), (−1, 1, 1), (3, 3, 1)}
6. Prove that the vector (1, 2, −1) lies in the two dimensional subspace of ℝ3 generated by the vectors (1, 3, 1) and (1, 4, 3).
7. Let {𝑣1 , 𝑣2 } be a basis for a vector space V. Show that
the set of vectors {𝑢1 , 𝑢2 }, where 𝑢1 = 𝑣1 + 𝑣2 , 𝑢2 = 𝑣1 −
𝑣2 , is also a basis for V.
8. Let V be a vector space of dimension n. Prove that no
set of n - 1 vectors can span V.
9. Let V be a vector space, and let W be a subspace of V.
If dim (V) = n and dim (W) = m, prove that m ≤ n.
Answers
1. { −1,2,1 , (2, −1,0)} is a basis.
2. { 1,1,1 , 1,0, −2 , (1,0,0)}.
3. a) Basis = { 1,1,0 , (0,0,1)}, dimension = 2.
b) Basis = { 1,0,1 , (0,1,1)}, dimension = 2.
c) Basis = { 1,0, −1 , (0,1, −1)}, dimension = 2.
4. (a) Basis (b) Not a basis
5. (a) Basis (b) Not a basis
6. 1,2, −1 = 2 1,3,1 − (1,4,3).
1.6
LINEAR TRANSFORMATIONS
A vector space has two operations defined on it, namely,
addition and scalar multiplication. Linear transformations
between vectors spaces are those functions that preserve
these linear structures in the following sense.
DEFINITION: Let 𝑈 and 𝑉 be vector spaces. Let 𝑢 and 𝑣 be
vectors in 𝑈 and let 𝑐 be a scalar. A function 𝑇: 𝑈 → 𝑉 is said
to be linear transformation if
𝑇 𝑢+𝑣 =𝑇 𝑢 +𝑇 𝑣
𝑇 𝑐𝑢 = 𝑐𝑇 𝑢
The first condition implies that ′𝑇′ maps the sum of two
vectors into the sum of images of those vectors. The second
condition implies that ′𝑇′ maps the scalar multiple of a
vector into the same scalar multiple of the image. Thus the
operations of addition and scalar multiplication are
preserved under linear transformation.
THEOREM: Let V be a finite-dimensional vector space over
the field F and let 𝛼1 , … , 𝛼𝑛 be ordered basis for V. Let W
be a vector space over the same field F and let 𝛽1 , … , 𝛽𝑛 be
any vectors in W. Then there is precisely one linear
transformation T from V into W such that
T(𝛼𝑗 ) = 𝛽𝑗 j= 1,…,n.
Proof: To prove there is some linear transformation T with
T(𝛼𝑗 ) = 𝛽𝑗 we proceed as follows. Given 𝛼 in V, there is a
unique n-tuple (𝑥1 , … , 𝑥𝑛 ) such that
𝛼 = 𝑥1 𝛼1 + ⋯ + 𝑥𝑛 𝛼𝑛 .
For this vector 𝛼 we define
𝑇(𝛼) = 𝑥1 𝛽1 + ⋯ + 𝑥𝑛 𝛽𝑛 .
Then T is a well-defined rule for associating with each
vector 𝛼 in V a vector 𝑇(𝛼) in W. From the definition it is
clear that 𝑇(𝛼𝑗 ) = 𝛽𝑗 for each j. To see that T is linear, let
𝛽 = 𝑦1𝛼1 + ⋯ + 𝑦𝑛𝛼𝑛
be in V and let c be any scalar. Now
𝑐𝛼 + 𝛽 = (𝑐𝑥1 + 𝑦1)𝛼1 + ⋯ + (𝑐𝑥𝑛 + 𝑦𝑛)𝛼𝑛
and so by definition
𝑇(𝑐𝛼 + 𝛽) = (𝑐𝑥1 + 𝑦1)𝛽1 + ⋯ + (𝑐𝑥𝑛 + 𝑦𝑛)𝛽𝑛
On the other hand,
𝑐𝑇(𝛼) + 𝑇(𝛽) = 𝑐(𝑥1𝛽1 + ⋯ + 𝑥𝑛𝛽𝑛) + (𝑦1𝛽1 + ⋯ + 𝑦𝑛𝛽𝑛)
= (𝑐𝑥1 + 𝑦1)𝛽1 + ⋯ + (𝑐𝑥𝑛 + 𝑦𝑛)𝛽𝑛
and thus 𝑇(𝑐𝛼 + 𝛽) = 𝑐𝑇(𝛼) + 𝑇(𝛽).
If U is a linear transformation from V into W with 𝑈(𝛼𝑗) = 𝛽𝑗, 𝑗 = 1, …, 𝑛, then for the vector 𝛼 = 𝑥1𝛼1 + ⋯ + 𝑥𝑛𝛼𝑛 we have
𝑈(𝛼) = 𝑈(𝑥1𝛼1 + ⋯ + 𝑥𝑛𝛼𝑛)
= 𝑥1𝑈(𝛼1) + ⋯ + 𝑥𝑛𝑈(𝛼𝑛)
= 𝑥1𝛽1 + ⋯ + 𝑥𝑛𝛽𝑛
so that U is exactly the rule T which we defined above. This shows that the linear transformation T with 𝑇(𝛼𝑗) = 𝛽𝑗 is unique.
The following theorem shows that any linear
transformation maps the zero vector of the domain vector
space to the zero vector of the co-domain vector space.
THEOREM: Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Let 0𝑈
and 0𝑉 be the zero vectors of 𝑈 and 𝑉. Then 𝑇 0𝑈 = 0𝑉 .
That is, a linear transformation maps a zero vector into a zero vector.
Proof: Let 𝑢 be a vector in 𝑈 and let 𝑇(𝑢) = 𝑣. Then
𝑇(0𝑈) = 𝑇(0𝑢) = 0𝑇(𝑢) = 0𝑣 = 0𝑉.
DEFINITION: Let 𝑇: 𝑈 → 𝑉 be a linear transformation. The
set of vectors in 𝑈 that are mapped into the zero vector of 𝑉
is called the kernel of 𝑇. The kernel is denoted by 𝑘𝑒𝑟 𝑇 .
The set of vectors in 𝑉 that are images of vectors in 𝑈
is called the range of 𝑇. The range is denoted by 𝑟𝑎𝑛𝑔𝑒 𝑇 .
We illustrate these sets in the following figure.
[Figure: in the first diagram, the kernel — all vectors in 𝑈 that 𝑇 maps into 0; in the second, the range — all vectors in 𝑉 that are images of vectors in 𝑈.]
Whenever we introduce sets in linear algebra, we are


interested in knowing whether they are vector spaces or
not. We now find that the kernel and range are indeed
vector spaces.
THEOREM: Let 𝑇: 𝑈 → 𝑉 be a linear transformation.
a) The kernel of 𝑇 is a subspace of 𝑈.
b) The range of 𝑇 is a subspace of 𝑉.
Proof: From the previous theorem, we know that the kernel
is non empty since it contains the zero vector of 𝑈.
To prove that the kernel is a subspace of 𝑈, we show
that it is closed under addition and scalar multiplication.
First we prove closure under addition. Let 𝑢1, 𝑢2 ∈ ker(𝑇). Then 𝑇(𝑢1) = 𝑇(𝑢2) = 0.
Now 𝑇 𝑢1 + 𝑢2 = 𝑇 𝑢1 + 𝑇 𝑢2 = 0 + 0 = 0.
Then vector 𝑢1 + 𝑢2 is mapped into 0. Thus 𝑢1 + 𝑢2 is in
ker 𝑇
Let us now show that ker⁡ (𝑇) is closed under scalar
multiplication. Let ′𝑐′ be a scalar.
𝑇 𝑐𝑢1 = 𝑐𝑇 𝑢1 = 𝑐0 = 0.
Thus 𝑐𝑢1 is in ker 𝑇 .
The kernel is closed under addition and under scalar
multiplication. It is a subspace of 𝑈.
(b) The previous theorem tells us that the range is non
empty since it contains the zero vector of 𝑉.
To prove that the range is a subspace of 𝑉, we show
that it is closed under addition and scalar multiplication.
Let 𝑣1 and 𝑣2 be elements of 𝑟𝑎𝑛𝑔𝑒 𝑇 . Thus ∃ vectors 𝑢1 and
𝑢2 in the domain 𝑈 such that
𝑇 𝑢1 = 𝑣1 and 𝑇 𝑢2 = 𝑣2
Now 𝑇 𝑢1 + 𝑢2 = 𝑇 𝑢1 + 𝑇 𝑢2 = 𝑣1 + 𝑣2 . The vector 𝑣1 + 𝑣2
is the image of 𝑢1 + 𝑢2 . Thus 𝑣1 + 𝑣2 is in the range.
Let ′𝑐′ be a scalar. Then 𝑇 𝑐𝑢1 = 𝑐𝑇 𝑢1 = 𝑐𝑣1
The vector 𝑐𝑣1 is the image of 𝑐𝑢1 . Thus 𝑐𝑣1 is in the range.
The range is closed under addition and under scalar
multiplication. It is a subspace of 𝑉.
The following theorem gives an important relationship
between the “sizes” of the kernel and the range of a linear
transformation.
THEOREM: Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Then
dim ker(𝑇) + dim range(𝑇) = dim domain(𝑇)
Proof: If ker(𝑇) = 𝑈, then range(𝑇) = {0}. Since the only vector space with dimension 0 is {0}, we are done in this case.
Suppose that ker 𝑇 ≠ 𝑈.
Let 𝑢1 , 𝑢2 , … , 𝑢𝑚 be a basis for ker 𝑇 . Add vectors 𝑢𝑚 +1 , … , 𝑢𝑛
to this set to get a basis 𝑢1 , 𝑢2 , … , 𝑢𝑛 for 𝑈.
We shall show that 𝑇(𝑢𝑚 +1 ), … , 𝑇(𝑢𝑛 ) form a basis for the
range, thus proving the theorem.
Let 𝑢 ∈ 𝑈.
Then we get scalars 𝑎1 , 𝑎2 , … . , 𝑎𝑛 such that 𝑢 = 𝑎1 𝑢1 + 𝑎2 𝑢2 +
⋯ + 𝑎𝑚 𝑢𝑚 + 𝑎𝑚 +1 𝑢𝑚 +1 + ⋯ + 𝑎𝑛 𝑢𝑛 .
Thus
𝑇 𝑢 = 𝑇 𝑎1 𝑢1 + 𝑎2 𝑢2 + ⋯ + 𝑎𝑚 𝑢𝑚 + 𝑎𝑚 +1 𝑢𝑚 +1 + ⋯ + 𝑎𝑛 𝑢𝑛
= 𝑎1𝑇(𝑢1) + ⋯ + 𝑎𝑚𝑇(𝑢𝑚) + 𝑎𝑚+1𝑇(𝑢𝑚+1) + ⋯ + 𝑎𝑛𝑇(𝑢𝑛)
= 𝑎𝑚+1𝑇(𝑢𝑚+1) + ⋯ + 𝑎𝑛𝑇(𝑢𝑛).
Since 𝑇 𝑢 represents an arbitrary vector in the range of 𝑇,
the vectors 𝑇 𝑢𝑚 +1 , … . 𝑇 𝑢𝑛 span the range.
It remains to prove that these vectors are linearly
independent. Consider the identity
𝑏𝑚 +1 𝑇 𝑢𝑚 +1 + ⋯ + 𝑏𝑛 𝑇 𝑢𝑛 = 0
⟹ 𝑇 𝑏𝑚+1 𝑢𝑚 +1 + ⋯ + 𝑏𝑛 𝑢𝑛 = 0
⟹ 𝑏𝑚+1 𝑢𝑚 +1 + ⋯ + 𝑏𝑛 𝑢𝑛 ∈ ker 𝑇 .
⟹ 𝑏𝑚 +1 𝑢𝑚+1 + ⋯ + 𝑏𝑛 𝑢𝑛 = 𝑐1 𝑢1 + ⋯ + 𝑐𝑚 𝑢𝑚
⟹ 𝑐1𝑢1 + ⋯ + 𝑐𝑚𝑢𝑚 − 𝑏𝑚+1𝑢𝑚+1 − ⋯ − 𝑏𝑛𝑢𝑛 = 0
Since the vectors 𝑢1 , … . , 𝑢𝑚 , , 𝑢𝑚 +1 , . . , 𝑢𝑛 are a basis, they are
linearly independent. Therefore, the coefficients are all zero.
𝑐1 = 0, …, 𝑐𝑚 = 0, 𝑏𝑚+1 = 0, …, 𝑏𝑛 = 0. So 𝑇(𝑢𝑚+1), …, 𝑇(𝑢𝑛) are linearly independent.
Therefore the set of vectors 𝑇 𝑢𝑚 +1 , … . , 𝑇 𝑢𝑛 is a basis for
the range.
TERMINOLOGY:
The kernel of a linear mapping 𝑇 is often called the null space. dim ker(𝑇) is called the nullity, and dim range(𝑇) is called the rank of the transformation. The previous theorem is often referred to as the rank/nullity theorem and written in the following form: rank(𝑇) + nullity(𝑇) = dim domain(𝑇).
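For a linear transformation given by a matrix, the rank/nullity theorem can be verified directly. A sketch assuming NumPy, with an arbitrarily chosen 3×4 matrix whose third row is the sum of the first two:

```python
import numpy as np

# T: R^4 -> R^3, T(x) = A x; dim domain(T) = 4.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])  # third row = first + second

rank = np.linalg.matrix_rank(A)  # dim range(T)

# Null-space basis from the SVD: right-singular vectors beyond the rank.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.sum(s > 1e-10):]  # rows spanning ker(T)
nullity = null_basis.shape[0]

assert np.allclose(A @ null_basis.T, 0.0)  # every basis vector maps to 0
assert rank + nullity == A.shape[1]        # rank(T) + nullity(T) = dim domain(T)
```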
Problem 1: Prove that the following transformation
𝑇: 𝑅 2 → 𝑅 2 is linear. T(x,y) = (2x, x+y)
Solution: We first show that T preserves addition. Let
(x1,y1) and (x2,y2) be elements of 𝑅 2 . Then
T((x1,y1)+ (x2,y2)) = T (x1+x2,y1+y2) by vector addition
= (2x1+2x2, x1+x2+y1+y2) by definition of T
= (2x1, x1+ y1)+( 2x2, x2+ y2) by vector addition
= T(x1,y1)+T (x2,y2) by definition of T
Thus T preserves vector addition.
We now show that T preserves scalar multiplication. Let c
be a scalar.
T(c(x1,y1))=T(cx1,cy1) by scalar multiplication of a vector
=(2c x1, cx1+c y1) by definition of T
=c(2x1, x1+ y1) by scalar multiplication of a vector
=cT (x1, y1) by definition of T
Thus T preserves scalar multiplication. T is linear.
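The two linearity conditions just verified algebraically can also be spot-checked at sample points (a numerical check at a few points is not a proof, but it catches errors quickly). A plain-Python sketch with illustrative values:

```python
# Numerical spot-check of the linearity conditions for T(x, y) = (2x, x + y).
def T(v):
    x, y = v
    return (2 * x, x + y)

u, w, c = (1.0, 4.0), (-2.0, 3.0), 5.0

# T(u + w) == T(u) + T(w)
lhs = T((u[0] + w[0], u[1] + w[1]))
rhs = tuple(a + b for a, b in zip(T(u), T(w)))
assert lhs == rhs

# T(c u) == c T(u)
assert T((c * u[0], c * u[1])) == tuple(c * a for a in T(u))
```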
Problem 2: Let 𝑃𝑛 be the vector space of real polynomial functions of degree ≤ n. Show that the following transformation 𝑇: 𝑃2 → 𝑃1 is linear.
T(ax² + bx + c) = (a + b)x + c
Solution: Let ax² + bx + c and px² + qx + r be arbitrary elements of P2. Then
T((ax² + bx + c) + (px² + qx + r)) = T((a + p)x² + (b + q)x + (c + r)) by vector addition
= (a + p + b + q)x + (c + r) by definition of T
= (a + b)x + c + (p + q)x + r
= T(ax² + bx + c) + T(px² + qx + r) by definition of T
Thus T preserves addition.
We now show that T preserves scalar multiplication. Let k be a scalar.
T(k(ax² + bx + c)) = T(kax² + kbx + kc) by scalar multiplication of a vector
= (ka + kb)x + kc by definition of T
= k((a + b)x + c)
= kT(ax² + bx + c) by definition of T
T preserves scalar multiplication. Therefore, T is a linear transformation.
Problem 3: Find the kernel and range of the linear
operator T(x,y,z)=(x,y,0)
Solution: Since the linear operator T maps R3 into R3, the
kernel and range will both be subspaces of R3.
Kernel: ker(T) is the subset that is mapped into (0,0,0). We
see that T(x,y,z) = (x,y,0)
= (0,0,0), if x=0,y=0
Thus ker(T) is the set of all vectors of the form (0,0,z). We
express this as ker(T) ={(0,0,z)}
Geometrically, ker(T) is the set of all vectors that lie on the
z-axis.
Range: The range of T is the set of all vectors of form
(x,y,0). Thus range (T) = {(x,y,0)}
Range (T) is the set of all vectors that lie in the x-y plane.
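The kernel and range found above are consistent with the rank/nullity theorem, which can be checked using the matrix of the projection. A sketch assuming NumPy:

```python
import numpy as np

# Matrix of the projection T(x, y, z) = (x, y, 0).
A = np.diag([1.0, 1.0, 0.0])

rank = np.linalg.matrix_rank(A)   # dim range(T): the x-y plane
nullity = 3 - rank                # dim ker(T): the z-axis

assert rank == 2 and nullity == 1
assert np.allclose(A @ np.array([0.0, 0.0, 7.0]), 0.0)  # (0,0,z) is in ker(T)
assert np.allclose(A @ np.array([3.0, 4.0, 9.0]), [3.0, 4.0, 0.0])
```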
Exercise
1. Prove that the following transformations 𝑇: 𝑅2 → 𝑅 are not linear.
(a) T(x, y) = y2
(b) T(x, y) = x − 3
2. Determine the kernel and range of each of the following transformations. Show that dim ker(T) + dim range(T) = dim domain(T) for each transformation.
(a) T(x, y, z) = (x, 0, 0) of 𝑹3 → 𝑹3
(b) T(x, y, z) = (x + y, z) of 𝑹3 → 𝑹2
(c) T(x, y) = (3x, x − y, y) of 𝑹2 → 𝑹3
3. Let 𝑇: 𝑈 → 𝑉 be a linear mapping. Let v be a nonzero vector in V. Let W be the set of vectors w in U such that T(w) = v. Is W a subspace of U?
4. Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Prove that dim range(T) = dim domain(T) if and only if T is one-to-one.
5. Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Prove that T is one-to-one if and only if it preserves linear independence.
Answers
2. (a) The kernel is the set {(0, r, s)} and the range is the set {(a, 0, 0)}. Dim ker(T) = 2, dim range(T) = 1, and dim domain(T) = 3, so dim ker(T) + dim range(T) = dim domain(T).
(b) The kernel is the set {(r, −r, 0)} and the range is R2. Dim ker(T) = 1, dim range(T) = 2, and dim domain(T) = 3, so dim ker(T) + dim range(T) = dim domain(T).
(c) The kernel is the zero vector and the range is the set {(3a, a−b, b)}. Dim ker(T) = 0, dim range(T) = 2, and dim domain(T) = 2, so dim ker(T) + dim range(T) = dim domain(T).
3. This set is not a subspace because it does not contain the zero vector.
4. T is one-to-one if and only if ker(T) is the zero vector, if and only if dim ker(T) = 0, if and only if dim range(T) = dim domain(T).
1.7

Matrix Representations of Linear Transformation

In this module we introduce a way of representing a linear transformation between general vector spaces by a matrix. We lead up to this discussion by first developing the information needed to represent a linear transformation by a matrix.
Definition: Let 𝑼 be a vector space with basis 𝑩 =
𝑢1 , . . . , 𝑢𝑛 and let 𝒖 be a vector in 𝑈. We know that there
exist unique scalars 𝑎1 , . . . . , 𝑎𝑛 such that
𝒖 = 𝑎1 𝒖1 + . . . . + 𝑎𝑛 𝒖𝑛
The column vector 𝑢𝑩 = (𝑎1, …, 𝑎𝑛)ᵀ is called the coordinate vector of u relative to this basis. The scalars 𝑎1, …, 𝑎𝑛 are called the coordinates of 𝒖 relative to this basis.
Note: We will use a column vector from coordinate vectors
rather than row vectors. The theory develops most
smoothly with this convention.
Example: Find the coordinate vector of 𝒖 = (4,5) relative to
the following bases 𝑩 and 𝑩′ of 𝑅 2 :
(a) The standard basis, 𝐵 = 1,0 , 0,1 and
(b) 𝐵′ = { 2,1 , −1,1 }.

Solution:
(a) By observation, we see that
(4, 5) = 4(1, 0) + 5(0, 1)

4
Thus 𝒖𝐵 =   . The given representation of u is, in
5
fact, relative to the standard basis.
(b) Let us now find the coordinate vector of u relative
to 𝐵′, a basis that is not the standard basis. Let
(4, 5) = 𝑎1 (2, 1) + 𝑎2 (-1, 1)
Thus

(4, 5) = (2𝑎1 ,𝑎1 ) + (-𝑎2 ,𝑎2 )


(4, 5) = (2a1  a2 , a1  a2 )

Comparing components leads to the following system of


equations.

2a1  a2  4
a1  a2  5
This system has the unique solution
𝑎1 = 3 , 𝑎2 = 2

3
Thus 𝑢𝐵 ′ =  
2
Definition: Let 𝑩 = 𝑢1 , . . . , 𝑢𝑛 and 𝑩′ = 𝑢′1 , . . . , 𝑢′ 𝑛 be
bases for a vector space 𝑈. Let the coordinate vectors of
𝑢1 , . . . , 𝑢𝑛 relative to the basis 𝑩′ = 𝑢′1 , . . . , 𝑢′ 𝑛 be
(𝑢1 )𝐵 ′ , . . . . , (𝑢𝑛 )𝐵 ′ . The matrix 𝑃, having these vectors as
columns , plays a central role in our discussion. It is called
the transition matrix from the basis 𝑩 to the basis 𝑩′.
Transition matrix 𝑷 = [(𝑢1 )𝐵 ′ , . . . . , (𝑢𝑛 )𝐵 ′ ].

Theorem: Let 𝑩 = 𝑢1 , . . . , 𝑢𝑛 and 𝑩′ = 𝑢′1 , . . . , 𝑢′ 𝑛 be


bases for a vector space 𝑈 . If u is a vector in 𝑈 having
coordinate vectors 𝒖𝐵 and 𝒖𝐵 ′ relative to these bases, then

𝒖𝐵 ′ = 𝑃𝒖𝐵

where 𝑃 is the transition matrix from 𝐵 to 𝐵′:


𝑃 = [ 𝑢1 𝐵′ , . . . . , 𝑢𝑛 𝐵 ′ ].

Proof: Since {𝑢′1 , . . . . , 𝑢′ 𝑛 } is a basis for 𝑈, each of the


vectors 𝑢1 , . . . , 𝑢𝑛 can be expressed as a linear
combination of these vectors.
Let
𝑢1 = 𝑐11 𝑢′1 + . . . +𝑐𝑛1 𝑢′𝑛
⋮
𝑢𝑛 = 𝑐1𝑛 𝑢′1 + . . . +𝑐𝑛𝑛 𝑢′𝑛
If u= 𝑎1 𝑢1 + . . . . + 𝑎𝑛 𝑢𝑛 , we get
𝒖 = 𝑎1 𝑢1 + . . . . + 𝑎𝑛 𝑢𝑛

= 𝑎1 𝑐11 𝑢′1 + . . . +𝑐𝑛1 𝑢′ 𝑛 + . . . . + 𝑎𝑛 𝑐1𝑛 𝑢′1 + . . . +𝑐𝑛𝑛 𝑢′ 𝑛


= (𝑎1 𝑐11 + . . . +𝑎𝑛 𝑐1𝑛 )𝑢′1 + . . . . + (𝑎1 𝑐𝑛1 + . . . +𝑎𝑛 𝑐𝑛𝑛 )𝑢′𝑛
The coordinate vector of u relative to 𝐵′ can therefore be
written

        𝑎1 𝑐11 + . . . + 𝑎𝑛 𝑐1𝑛        𝑐11 . . . 𝑐1𝑛     𝑎1
𝒖𝐵 ′ =           ⋮              =       ⋮       ⋮        ⋮
        𝑎1 𝑐𝑛1 + . . . + 𝑎𝑛 𝑐𝑛𝑛        𝑐𝑛1 . . . 𝑐𝑛𝑛     𝑎𝑛

     = [ 𝑢1 𝐵′ , . . . . , 𝑢𝑛 𝐵 ′ ]𝒖𝐵

proving the theorem.


Example: Consider the bases 𝐵= 1,2 , 3, −1 and
3
𝐵′ = { 1,0 , 0,1 } of 𝑅 2 . If u is a vector such that 𝒖𝐵    ,
4
find 𝒖𝐵 ′ .

Solution: We express the vectors of 𝐵 in terms of the


vectors of 𝐵′ to get the transition matrix.
1,2 = 1 1,0 + 2(0,1)
3, −1 = 3 1,0 − 1(0,1)

1  3
The coordinate vectors of (1,2) and (3, −1) are   and  
2  1
.
The transition matrix 𝑃 is thus
1 3 
P 
 2 1
(Observe that the columns of 𝑃 are the vectors of the
basis 𝐵.) We get

1 3   3  15
𝒖𝐵 ′       
 2 1  4   2 
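The same computation can be checked numerically: the transition matrix has the 𝐵-vectors as its columns, and 𝒖𝐵′ is just the product 𝑃𝒖𝐵. A small NumPy sketch, not part of the original text:

```python
import numpy as np

# Transition matrix from B = {(1,2), (3,-1)} to the standard basis:
# its columns are the B-vectors written in standard coordinates.
P = np.array([[1.0,  3.0],
              [2.0, -1.0]])
u_B = np.array([3.0, 4.0])

u_Bprime = P @ u_B
print(u_Bprime)  # [15.  2.]
```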

Let f be any function. We know that f is defined if its


effect on every element of the domain is known. This is
usually done by means of an equation that gives that effect
of the function on an arbitrary element in the domain. For
example, consider the function f defined by

f x  x 3

The domain of f is x  3 . The above equation gives the effect


of f on every element in this interval. For example, f  7   2 .
Similarly, a linear transformation T is defined if its value at
every vector in the domain is known. However, unlike a
general function, we will see that if we know the effect of
the linear transformation on a finite subset of the domain
(a basis), it will be automatically defined on all elements of
the domain.
Theorem: Let T : U  V be a linear transformation. Let
{u1 ,......,un } be a basis for U . T is determined by its effect on the
basis vectors, namely by T(u1 ), . . . , T(un ). The range of T is
spanned by T(u1 ), . . . , T(un ).

Thus, defining a linear transformation on a basis defines it


on the whole domain.

Proof: Let u be an element of U . Since u1 ,.....,un  is a basis


for U , there exist scalars a1 ,.....an such that

u  a1u1  .....  anun

The linearity of T gives

T u   T  a1u1  .......  anun 

= a1T u1   ......  anT un 

Therefore T  u  is known if T u1  ,.....T un  are known.

Further, T  u  may be interpreted to be an arbitrary element


in the range of T . It can be expressed as a linear
combination of T u1  ,.....T un  . Thus T u1  ,.....T un  spans the
range of T .
From now onwards, we will represent the elements of U
and V by coordinate vectors, and T by a matrix A that
defines a transformation of coordinate vectors. The matrix
A is constructed by finding the effect of T on basis vectors.
Theorem: Let U and V be vector spaces with bases
B  u1 ,.....un  and B  v1 ,.....,vm  . Let T : U  V be a linear
transformation. If u is a vector in U with image T  u  having
coordinate vectors a and b relative to these bases, then

b  Aa Where A  T u1 B ....T un B 

The Matrix 𝐴 thus defines a transformation of coordinate


vectors of 𝑈 in the “same way” as 𝑇 transforms the vectors
of 𝑈 . See the figure below. 𝐴 is called the matrix
representation of 𝑻 (or matrix of 𝑻) with respect the bases
𝐵 and 𝐵′ .

𝑢 . . . . . . . . . . 𝑇 . . . . . . . . . . 𝑇(𝑢)
|                                        |
coordinate                       coordinate
mapping                            mapping
|                                        |
𝑎 . . . . . . . . . . 𝐴 . . . . . . . . . . 𝑏

Figure

Proof: Let 𝑢 = 𝑎1 𝑢1 + . . . +𝑎𝑛 𝑢𝑛 .


Using the linearity of 𝑇, we can write

𝑇(𝑢) = 𝑇(𝑎1 𝑢1 + . . . +𝑎𝑛 𝑢𝑛 )


= 𝑎1 𝑇(𝑢1 )+ . . . +𝑎𝑛 𝑇(𝑢𝑛 )

Let the effect of 𝑇 on the basis vectors of 𝑈 be

𝑇 𝑢1 = 𝑐11 𝑣1 + . . . + 𝑐1𝑚 𝑣𝑚
𝑇 𝑢2 = 𝑐21 𝑣1 + . . . + 𝑐2𝑚 𝑣𝑚
.
.
.
𝑇 𝑢𝑛 = 𝑐𝑛1 𝑣1 + . . . + 𝑐𝑛𝑚 𝑣𝑚

Thus

𝑇 𝑢 = 𝑎1 𝑐11 𝑣1 + . . . + 𝑐1𝑚 𝑣𝑚 + . . . +𝑎𝑛 (𝑐𝑛1 𝑣1 + . . . + 𝑐𝑛𝑚 𝑣𝑚 )

= 𝑎1 𝑐11 + . . . + 𝑎𝑛 𝑐𝑛1 𝑣1 + . . . +(𝑎1 𝑐1𝑚 + . . . + 𝑎𝑛 𝑐𝑛𝑚 )𝑣𝑚

The coordinate vector of 𝑇(𝑢) is therefore

        𝑎1 𝑐11 + . . . + 𝑎𝑛 𝑐𝑛1        𝑐11 … 𝑐𝑛1     𝑎1
𝑏 =              ⋮              =       ⋮      ⋮      ⋮    = 𝐴𝑎
        𝑎1 𝑐1𝑚 + . . . + 𝑎𝑛 𝑐𝑛𝑚        𝑐1𝑚 … 𝑐𝑛𝑚     𝑎𝑛

proving the theorem.

Importance of Matrix Representation

The fact that every linear transformation can now be


represented by a matrix means that all the theoretical
mathematics of these vector spaces and their linear
transformation can be undertaken in terms of the vector
spaces 𝑅 𝑛 and matrices. A second reason is a
computational one. The elements of 𝑅 𝑛 and matrices can be
manipulated on computers. Thus general vector spaces
and their linear transformations can be discussed on
computers through these representations.

Relation between Matrix Representations

We have seen that the matrix representation of a linear


transformation depends upon the bases selected. When
linear transformations arise in applications, a goal is often
to determine a simple matrix representation. At this time
we discuss how matrix representations of linear operators
relative to different bases related. We remind the reader
that if A and B are square matrices of the same size, then 𝐵
is said to be similar to 𝐴 if there exists an invertible matrix
𝑃 such that

𝐵 = 𝑃−1 𝐴𝑃

The transformation of the matrix 𝐴 into the matrix 𝐵 in this


manner is called a similarity transformation. We now show that
the matrix representations of a linear operator relative to
two bases are similar matrices.

Theorem: Let 𝑈 be a vector space with bases 𝐵 and 𝐵′. Let


𝑃 be the transition matrix from 𝐵′ to 𝐵. If 𝑇 is a linear
operator on 𝑈 , having matrix 𝐴 with respect to the first
basis and 𝐴′ with respect to the second basis , then

𝐴′ = 𝑃−1 𝐴𝑃
Proof: Consider a vector 𝑢 in 𝑈. Let its coordinate vector
relative to B and 𝐵′ be 𝑎 and 𝑎′. The coordinate vectors of
𝑇(𝑢) are 𝐴𝑎 and 𝐴′𝑎′. Since P is the transition matrix from 𝐵′
to 𝐵, we know that

𝑎 = 𝑃𝑎′ and 𝐴𝑎 = 𝑃(𝐴′ 𝑎′ )

This second equation may be rewritten

𝑃−1 𝐴𝑎 = 𝐴′𝑎′

Substituting 𝑎 = 𝑃𝑎′ into this equation gives

𝑃−1 𝐴𝑃𝑎′ = 𝐴′𝑎′

The effect of the matrices 𝑃−1 𝐴𝑃 and 𝐴′ as transformations


on an arbitrary coordinate vector 𝑎′ is the same. Thus these
matrices are equal.

Applications of Linear Transformation


A specific application of linear maps is for geometric
transformations, such as those performed in computer
graphics, where the translation, rotation and scaling of 2D
or 3D objects is performed by the use of a transformation
matrix. For Example:

1. Reflection with respect to x-axis:


L : R2 → R2 , L(𝑢1 , 𝑢2 ) = A [𝑢1 , 𝑢2 ]𝑇 , where A = [ 1 0 ; 0 −1 ], so L(𝑢1 , 𝑢2 ) = (𝑢1 , −𝑢2 ).
For example, the reflection of the triangle with vertices
(−1, 4), (3, 1), (2, 6) is

L(−1, 4) = (−1, −4), L(3, 1) = (3, −1), L(2, 6) = (2, −6).

The plot is given below.

[Plot: the triangle (−1, 4), (3, 1), (2, 6) and its reflection (−1, −4), (3, −1), (2, −6).]

2. Reflection with respect to 𝑦 = −𝑥 :

L : R2 → R2 , L(𝑢1 , 𝑢2 ) = A [𝑢1 , 𝑢2 ]𝑇 , where A = [ 0 −1 ; −1 0 ], so L(𝑢1 , 𝑢2 ) = (−𝑢2 , −𝑢1 ).

Thus, the reflection of the triangle with vertices
(−1, 4), (3, 1), (2, 6) is

L(−1, 4) = (−4, 1), L(3, 1) = (−1, −3), L(2, 6) = (−6, −2).

The plot is given below.

[Plot: the triangle (−1, 4), (3, 1), (2, 6) and its reflection (−4, 1), (−1, −3), (−6, −2).]

3. Rotation:

L : R2 → R2 , L(𝑢1 , 𝑢2 ) = A [𝑢1 , 𝑢2 ]𝑇 , where

A = [ cos 𝜃  −sin 𝜃 ; sin 𝜃  cos 𝜃 ]

For example, for 𝜃 = 𝜋/2,

A = [ cos 𝜋/2  −sin 𝜋/2 ; sin 𝜋/2  cos 𝜋/2 ] = [ 0 −1 ; 1 0 ].

Thus, the rotation for the triangle with vertices (0, 0), (1, 0), (1, 1)
is

L(0, 0) = (0, 0), L(1, 0) = (0, 1), and

L(1, 1) = [ 0 −1 ; 1 0 ] [1, 1]𝑇 = (−1, 1).

The plot is given below.

[Plot: the triangle (0, 0), (1, 0), (1, 1) and its rotation (0, 0), (0, 1), (−1, 1).]
Problem 1: Consider the linear transformation T : R3  R2
defined as follows on basis vectors of R 3 . Find T 1, 2,3.

T 1,0,0    3, 1 , T  0,1,0    2,1 , T  0,0,1   3,0

Solution: Since T is defined on basis vectors of R 3 , it is


defined on the whole space. To find, T 1, 2,3 , express the
vector 1, 2,3 as a linear combination of the basis vectors
and use the linearity of T .

T 1, 2,3  T 11,0,0   2  0,1,0   3 0,0,1  

 1T 1,0,0  , 2T  0,1,0   3T  0,0,1

 1 3, 1  22,1  3 3,0 

  8, 3
Problem 2: Let 𝑇: 𝑈 → 𝑉 be a linear transformation. T is
defined relative to bases 𝐵 = {𝑢1 , 𝑢2 , 𝑢3 } and 𝐵′ = 𝑣1 , 𝑣2 of 𝑈
and 𝑉 as follows

𝑇 𝑢1 = 2𝑣1 − 𝑣2
𝑇 𝑢2 = 3𝑣1 + 2𝑣2
𝑇 𝑢3 = 𝑣1 − 4𝑣2
Find the matrix representation of 𝑇 with respect to these
bases and use this matrix to determine the image of the
vector 𝑢 = 3𝑢1 + 2𝑢2 − 𝑢3 .

Solution: The coordinate vectors of 𝑇 𝑢1 , 𝑇 𝑢2 and 𝑇 𝑢3


are

2 3 1


 1 ,   and  
  2  4 
These vectors make up the columns of the matrix of 𝑇

 2 3 1 
A 
 1 2 4 
Let us now find the image of the vector 𝑢 = 3𝑢1 + 2𝑢2 − 𝑢3
using this matrix.
3
The coordinate vector of 𝑢 is a   2  .
 
 1
We get
3
 2 3 1    11
Aa    2   5
 1 2  4   1  
 
11
𝑇(𝑢) has coordinate vector   . Thus 𝑇 𝑢 = 11𝑣1 + 5𝑣2 .
5
Problem 3: Consider the linear transformation 𝑇: 𝑅 3 → 𝑅 2 ,
defined by 𝑇 𝑥, 𝑦, 𝑧 = (𝑥 + 𝑦, 2𝑧). Find the matrix of 𝑇 with
respect to the bases {𝑢1 , 𝑢2 , 𝑢3 } and {𝑢′1 , 𝑢′2 } of 𝑅 3 and 𝑅 2 ,
where
𝑢1 = 1,1,0 , 𝑢2 = 0,1,4 , 𝑢3 = 1,2,3 𝑎𝑛𝑑 𝑢′1 = 1,0 , 𝑢′2 = (0,2).

Use this matrix to find the image of the vector 𝑢 = (2,3,5)

Solution: We find the effect of 𝑇 on the basis vectors of 𝑅 3 .


T (u1 )  T (1,1,0)  (2,0)  2(1,0)  0(0,2)  2u '1  0u '2
T (u2 )  T (0,1,4)  (1,8)  1(1,0)  4(0,2)  1u '1  4u '2
T (u3 )  T (1,2,3)  (3,6)  3(1,0)  3(0,2)  3u '1  3u '2

The coordinate vectors of T (u1 ), T (u2 ) and T (u3 ) are thus


 2  1   3
,
0 4 , and 3 . These vectors from the columns of the
     
matrix of 𝑇.
 2 1 3
A 
 0 4 3
Let us now use A to find the image of the vector 𝑢 = 2,3,5 .
We determine the coordinate vector of 𝑢. It can be shown
that
𝑢 = 2,3,5 = 3 1,1,0 + 2 0,1,4 − 1,2,3
= 3𝑢1 + 2𝑢2 + (−1)𝑢3
3
The coordinate vector of 𝑢 is thus a   2  . The coordinate
 
 1
vector of 𝑇(𝑢) is
3
 2 1 3    5 
b  Aa     2   5 
 0 4 3   1  
 
Therefore, 𝑇 𝑢 = 5𝑢′1 + 5𝑢′2 = 5 1,0 + 5 0,2 = (5,10).
We can check this result directly using the definition
𝑇(𝑥, 𝑦, 𝑧) = (𝑥 + 𝑦, 2𝑧).
For 𝑢 = 2,3,5 , this gives

𝑇 𝑢 = 𝑇 2,3,5 = 2 + 3,2 × 5 = (5,10).
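The agreement between the matrix computation and the direct definition can be confirmed numerically. A NumPy sketch, where the helper `T` below simply encodes the rule from the problem statement:

```python
import numpy as np

def T(x, y, z):
    # The transformation from the problem statement: T(x,y,z) = (x+y, 2z).
    return np.array([x + y, 2.0 * z])

# Matrix of T relative to the bases {u1,u2,u3} and {u'1,u'2}.
A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 3.0]])
a = np.array([3.0, 2.0, -1.0])          # u = 3u1 + 2u2 - u3
b = A @ a                               # coordinates of T(u): [5, 5]
u1p, u2p = np.array([1.0, 0.0]), np.array([0.0, 2.0])

image = b[0] * u1p + b[1] * u2p
print(image)            # [ 5. 10.]
print(T(2.0, 3.0, 5.0)) # [ 5. 10.]  -- agrees with the direct computation
```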


Problem 4: Consider the linear operator 𝑇 𝑥, 𝑦 = 2𝑥, 𝑥 + 𝑦
on 𝑅 2 . Find the matrix of 𝑇 with respect to the standard
basis 𝐵 = 1,0 , 0,1 of 𝑅 2 . Use the transformation
𝐴′ = 𝑃−1 𝐴𝑃 to determine the matrix 𝐴′ with respect to the
basis 𝐵′ = { (−2,3) , (1, −1) }.

Solution: The effect of 𝑇 on the vectors of the standard


basis is
𝑇 1,0 = 2,1 = 2 1,0 + 1(0,1)
𝑇 0,1 = 0,1 = 0 1,0 + 1(0,1)

The matrix of T relative to the standard basis is

2 0
A 
1 1 

We now find 𝑃, the transition matrix from 𝐵′ to 𝐵. Write the


vectors of 𝐵′ in terms of those of 𝐵.

−2,3 = −2 1,0 + 3(0,1)


1, −1 = 1 1,0 − (1(0,1)

The transition matrix is


 2 1 
P 
 3 1

Therefore
1
 2 1   2 0  2 1 
A '  P 1 AP    1 1   3 1
 3 1   
1 1   2 0  3 2 
  
3 2   1 1   10 6 

Exercise
1) Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Let 𝑇 be
defined relative to bases {𝑢1 , 𝑢2 , 𝑢3 } and {𝑣1 , 𝑣2 , 𝑣3 } of 𝑈
and 𝑉 as follows:
𝑇 𝑢1 = 𝑣1 + 𝑣2 + 𝑣3
𝑇 𝑢2 = 3𝑣1 −2𝑣2
𝑇 𝑢3 = 𝑣1 + 2𝑣2 − 𝑣3 .
Find the matrix of 𝑇 with respect to these bases. Use
this matrix to find the image of the vector
𝑢 = 3𝑢1 + 2𝑢2 − 5𝑢3 .

2) Find the matrices of the following linear operators on


𝑅3 with respect to the standard basis of 𝑅3 . Use these
matrices to find the images of the vector −1,5,2
(a) 𝑇 𝑥, 𝑦, 𝑧 = 𝑥, 2𝑦, 3𝑧
(b) 𝑇 𝑥, 𝑦, 𝑧 = 𝑥, 0,0

3) Consider the linear transformation 𝑇: 𝑅3 → 𝑅2 defined


by 𝑇 𝑥, 𝑦, 𝑧 = (𝑥 − 𝑦, 𝑥 + 𝑧) . Find the matrix of 𝑇 with
respect to the bases {𝑢1 , 𝑢2 , 𝑢3 } and {𝑢′1 , 𝑢′2 } of 𝑅3 and
𝑅2 , where
𝑢1 = 1, −1,0 , 𝑢2 = (2,0,1)
𝑢3 = 1,2,1 and 𝑢′1 = −1,0 , 𝑢′2 = (0,1).
Use this matrix to find the image of the vector
𝑢 = 3, −4,0 .

4) Find the matrix of the differential operator 𝐷 with


respect to the basis {2𝑥 2 , 𝑥, −1} of 𝑃2 . Use this matrix to
find the image of 3𝑥 2 − 2𝑥 + 4.
5) Find the matrix of the following linear transformations
with respect to the basis 𝑥, 1 of 𝑃1 and {𝑥 2 , 𝑥, 1}of 𝑃2 .
(a) 𝑇(𝑎𝑥 2 + 𝑏𝑥 + 𝑐) = 𝑏 + 𝑐 𝑥 2 + 𝑏 − 𝑐 𝑥 of 𝑃2 into itself
(b) 𝑇 𝑎𝑥 + 𝑏 = 𝑏𝑥 2 + 𝑎𝑥 + 𝑏 of 𝑃1 into 𝑃2 .

6) Consider the linear operator 𝑇 𝑥, 𝑦 = (2𝑥, 𝑥 + 𝑦) on 𝑅2 .


Find the matrix of 𝑇 with respect to the standard basis
of 𝑅2 . Use a similarity transformation to then find the
matrix with respect to the basis 1,1 , 2,1 of 𝑅2 .

7) Let 𝑈, 𝑉 𝑎𝑛𝑑 𝑊 be vector spaces with bases


𝐵 = {𝑢1 , . . . , 𝑢𝑛 }, 𝐵′ = {𝑣1 , . . . . , 𝑣𝑚 }, and 𝐵′′ = {𝑤1 , . . . . , 𝑤𝑝 },
and let 𝑇: 𝑈 → 𝑉 and 𝐿: 𝑉 → 𝑊 be linear transformations. Let 𝑃
be the matrix of 𝑇 with respect to 𝐵 𝑎𝑛𝑑 𝐵′, and 𝑄 be
the matrix representation of 𝐿 with respect to 𝐵′ 𝑎𝑛𝑑 𝐵′′.
Prove that the matrix of 𝐿𝑜𝑇 with respect to 𝐵 𝑎𝑛𝑑 𝐵′′ is
𝑄𝑃.

8) Is it possible for two distinct linear transformations


𝑇: 𝑈 → 𝑉 and 𝐿: 𝑈 → 𝑉 to have the same matrix with
respect to bases 𝐵 𝑎𝑛𝑑 𝐵′ of 𝑈 𝑎𝑛𝑑 𝑉?
Answers

1 3 1 
1) 1 2 2  , 4v1  11v2  8v3
 
1 0 1
1 0 0 
2) (a) 0 2 0  ,( 1,10,6)
 
0 0 3 

1 0 0 
(b)
0 0 0  ,( 1,0,0)
 
0 0 0 
 2 2 1 
3)   ,(7,3)
 1 3 2 
0 0 0

4) 4 0 0 ,6 x  2

 
 0 1 0 
0 1 1 

5) (a) 0 1 1

 
0 0 0 
0 1 
(b) 1 0 
 
0 1 

2 0 2 2
6)   , 0 1 
 1 1   
8) No
1. Solve the system of equations by using the method
of Gauss-Jordan elimination method.
2𝑥1 + 2𝑥2 − 4𝑥3 = 14
3𝑥1 + 𝑥2 + 𝑥3 = 8
2𝑥1 − 𝑥2 + 2𝑥3 = −1

Solution:

Augmented matrix is
[ 2  2 −4 | 14 ]
[ 3  1  1 |  8 ]
[ 2 −1  2 | −1 ]

(1/2)R1 ~
[ 1  1 −2 |  7 ]
[ 3  1  1 |  8 ]
[ 2 −1  2 | −1 ]

R2 − 3R1 , R3 − 2R1 ~
[ 1  1 −2 |   7 ]
[ 0 −2  7 | −13 ]
[ 0 −3  6 | −15 ]

(−1/2)R2 ~
[ 1  1  −2  |    7 ]
[ 0  1 −7/2 | 13/2 ]
[ 0 −3   6  |  −15 ]

R1 − R2 , R3 + 3R2 ~
[ 1  0  3/2 |  1/2 ]
[ 0  1 −7/2 | 13/2 ]
[ 0  0 −9/2 |  9/2 ]

(−2/9)R3 ~
[ 1  0  3/2 |  1/2 ]
[ 0  1 −7/2 | 13/2 ]
[ 0  0   1  |   −1 ]

R2 + (7/2)R3 , R1 − (3/2)R3 ~
[ 1  0  0 |  2 ]
[ 0  1  0 |  3 ]
[ 0  0  1 | −1 ]
This is the reduced echelon form.
∴ The solution is 𝑥1 = 2, 𝑥2 = 3, 𝑥3 = −1
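A quick check of the answer: `np.linalg.solve` carries out, in effect, the same elimination on the coefficient matrix. Not part of the original text:

```python
import numpy as np

# Coefficient matrix and right-hand side of the 3x3 system.
A = np.array([[2.0,  2.0, -4.0],
              [3.0,  1.0,  1.0],
              [2.0, -1.0,  2.0]])
b = np.array([14.0, 8.0, -1.0])

x = np.linalg.solve(A, b)
print(x)  # [ 2.  3. -1.]
```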

2. Solve, if possible, the system of equations


𝑥1 + 2𝑥2 − 𝑥3 − 𝑥4 = 0
𝑥1 + 2𝑥2 + 𝑥4 = 4
−𝑥1 − 2𝑥2 + 2𝑥3 + 4𝑥4 = 5
Solution:

Augmented matrix is
[  1  2 −1 −1 | 0 ]
[  1  2  0  1 | 4 ]
[ −1 −2  2  4 | 5 ]

R2 − R1 , R3 + R1 ~
[ 1  2 −1 −1 | 0 ]
[ 0  0  1  2 | 4 ]
[ 0  0  1  3 | 5 ]

R1 + R2 , R3 − R2 ~
[ 1  2  0  1 | 4 ]
[ 0  0  1  2 | 4 ]
[ 0  0  0  1 | 1 ]

R1 − R3 , R2 − 2R3 ~
[ 1  2  0  0 | 3 ]
[ 0  0  1  0 | 2 ]
[ 0  0  0  1 | 1 ]
This is the reduced echelon form.
The corresponding system of equation is
𝑥1 + 2𝑥2 = 3, 𝑥3 = 2, 𝑥4 = 1
There are many solutions
𝑥1 = 3 − 2𝑥2 , 𝑥3 = 2, 𝑥4 = 1
Let 𝑥2 = 𝑘𝜖ℝ
The general solution is
𝑥1 = 3 − 2𝑘, 𝑥2 = 𝑘, 𝑥3 = 2, 𝑥4 = 1
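The general solution can be verified by substituting a few values of k back into the original system. A NumPy sketch, not part of the original text:

```python
import numpy as np

# Coefficient matrix of the 3x4 system and its right-hand side.
A = np.array([[ 1.0,  2.0, -1.0, -1.0],
              [ 1.0,  2.0,  0.0,  1.0],
              [-1.0, -2.0,  2.0,  4.0]])
b = np.array([0.0, 4.0, 5.0])

# The general solution (3-2k, k, 2, 1) should satisfy A x = b for any k.
for k in (0.0, 1.0, -2.5):
    x = np.array([3.0 - 2.0 * k, k, 2.0, 1.0])
    print(np.allclose(A @ x, b))  # True for every k
```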

3. Solve, if possible the system of equations


𝑥1 − 𝑥2 − 2𝑥3 =7

2𝑥1 − 2𝑥2 + 2𝑥3 − 4𝑥4 = 12


−𝑥1 + 𝑥2 − 𝑥3 + 2𝑥4 = −4
−3𝑥1 + 𝑥2 − 8𝑥3 − 10𝑥4 = −29

Solution:

Augmented matrix is
[  1 −1 −2   0 |   7 ]
[  2 −2  2  −4 |  12 ]
[ −1  1 −1   2 |  −4 ]
[ −3  1 −8 −10 | −29 ]

R2 − 2R1 , R3 + R1 , R4 + 3R1 ~
[ 1 −1  −2   0 |  7 ]
[ 0  0   6  −4 | −2 ]
[ 0  0  −3   2 |  3 ]
[ 0 −2 −14 −10 | −8 ]

R2 ↔ R4 ~
[ 1 −1  −2   0 |  7 ]
[ 0 −2 −14 −10 | −8 ]
[ 0  0  −3   2 |  3 ]
[ 0  0   6  −4 | −2 ]

(−1/2)R2 ~
[ 1 −1 −2  0 |  7 ]
[ 0  1  7  5 |  4 ]
[ 0  0 −3  2 |  3 ]
[ 0  0  6 −4 | −2 ]

R1 + R2 ~
[ 1  0  5  5 | 11 ]
[ 0  1  7  5 |  4 ]
[ 0  0 −3  2 |  3 ]
[ 0  0  6 −4 | −2 ]

(−1/3)R3 ~
[ 1  0  5   5  | 11 ]
[ 0  1  7   5  |  4 ]
[ 0  0  1 −2/3 | −1 ]
[ 0  0  6  −4  | −2 ]

R1 − 5R3 , R2 − 7R3 , R4 − 6R3 ~
[ 1  0  0  25/3 | 16 ]
[ 0  1  0  29/3 | 11 ]
[ 0  0  1  −2/3 | −1 ]
[ 0  0  0    0  |  4 ]

(1/4)R4 ~
[ 1  0  0  25/3 | 16 ]
[ 0  1  0  29/3 | 11 ]
[ 0  0  1  −2/3 | −1 ]
[ 0  0  0    0  |  1 ]

R1 − 16R4 , R2 − 11R4 , R3 + R4 ~
[ 1  0  0  25/3 | 0 ]
[ 0  1  0  29/3 | 0 ]
[ 0  0  1  −2/3 | 0 ]
[ 0  0  0    0  | 1 ]

The last row corresponds to the equation 0 = 1, which is impossible.

∴ The system of equations has no solution.

4. Are the following sets subspaces of ℝ3 ? Verify.


a. 𝑊1 = 𝑎, 𝑎 2 , 𝑏 𝑎 , 𝑏 ∈ ℝ
b. 𝑊2 = 𝑎 + 1, 𝑎, 𝑎 + 2 𝑎 ∈ ℝ
c. 𝑊3 = 𝑎, 𝑏, 𝑐 𝜖ℝ3 𝑎 + 2𝑏 + 𝑐 = 0
Solution:
a. Let 𝑎, 𝑎 2 , 𝑏 and 𝑐, 𝑐 2 , 𝑑 be any two elements of 𝑊1 .
We get
𝑎, 𝑎 2 , 𝑏 + 𝑐, 𝑐 2 , 𝑑 = 𝑎 + 𝑐, 𝑎 2 + 𝑐 2 , 𝑏 + 𝑑
≠ 𝑎 + 𝑐, 𝑎 + 𝑐 2 , 𝑏 + 𝑑
∴ 𝑎, 𝑎 2 , 𝑏 + 𝑐, 𝑐 2 , 𝑑 ∉ 𝑤1
𝑊1 is not closed under addition
∴ 𝑊1 is not a subspace of ℝ3 .

b. Let 𝑎 + 1, 𝑎, 𝑎 + 2 , 𝑏 + 1, 𝑏, 𝑏 + 2 𝜖𝑊2

We get 𝑎 + 1, 𝑎, 𝑎 + 2 + 𝑏 + 1, 𝑏, 𝑏 + 2
= 𝑎 + 𝑏 + 2, 𝑎 + 𝑏, 𝑎 + 𝑏 + 4
≠ 𝑎 + 𝑏 + 1, 𝑎 + 𝑏, 𝑎 + 𝑏 + 2
𝑊2 is not closed under addition
∴ 𝑊2 is not a subspace of ℝ3

2nd method: If 𝑊2 is a subspace ⇒ (0, 0, 0)𝜖𝑊2

Is there a value of a for which 𝑎 + 1, 𝑎, 𝑎 + 2 is 0, 0, 0 ?


𝑎 + 1, 𝑎, 𝑎 + 2 = (0, 0, 0)

Equating corresponding components,

we get 𝑎 = 0, 𝑎 + 1 = 0 and 𝑎 + 2 = 0.
This system of equations has no solution

∴ 𝑊2 is not a subspace of ℝ3 .

c. Let 𝑎1 , 𝑏1 , 𝑐1 , 𝑎2 , 𝑏2 , 𝑐2 𝜖𝑊3
⇒ 𝑎1 + 2𝑏1 + 𝑐1 = 0, 𝑎2 + 2𝑏2 + 𝑐2 = 0
𝑎1 , 𝑏1 , 𝑐1 + 𝑎2 , 𝑏2 , 𝑐2 = 𝑎1 + 𝑎2 , 𝑏1 + 𝑏2 , 𝑐1 + 𝑐2
Then (𝑎1 + 𝑎2 ) + 2(𝑏1 + 𝑏2 ) + (𝑐1 + 𝑐2 )
= (𝑎1 + 2𝑏1 + 𝑐1 ) + (𝑎2 + 2𝑏2 + 𝑐2 )
= 0.
⇒ 𝑎1 , 𝑏1 , 𝑐1 + 𝑎2 , 𝑏2 , 𝑐2 𝜖𝑊3
𝑊3 is closed under addition.

Let 𝑐𝜖ℝ

𝑐(𝑎1 , 𝑏1 , 𝑐1 ) = (𝑐𝑎1 , 𝑐𝑏1 , 𝑐𝑐1 )

and 𝑐𝑎1 + 2(𝑐𝑏1 ) + 𝑐𝑐1 = 𝑐(𝑎1 + 2𝑏1 + 𝑐1 ) = 0

⇒ 𝑐(𝑎1 , 𝑏1 , 𝑐1 ) ∈ 𝑊3

𝑊3 is closed under scalar multiplication.

∴ 𝑊3 is a subspace of ℝ3
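Closure checks like these are easy to probe numerically. A minimal sketch for 𝑊3 (the helper `in_W3` and the sample vectors are illustrations, not from the text):

```python
import numpy as np

def in_W3(v, tol=1e-12):
    # Membership test for W3 = {(a,b,c) : a + 2b + c = 0}.
    a, b, c = v
    return abs(a + 2 * b + c) < tol

u = np.array([1.0, 1.0, -3.0])   # 1 + 2 - 3 = 0, so u is in W3
v = np.array([2.0, 0.0, -2.0])   # 2 + 0 - 2 = 0, so v is in W3

print(in_W3(u), in_W3(v))            # True True
print(in_W3(u + v), in_W3(5.0 * u))  # True True: closed under + and scalars
```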

5. Let 𝑈 be the vector space generated by the functions


𝑓(𝑥) = 2𝑥 − 7 and 𝑔(𝑥) = 𝑥 2 − 3𝑥 + 5. Show that the
function ℎ(𝑥) = 3𝑥 2 − 5𝑥 + 1 lies in 𝑈.

Solution:

ℎ will be in the space generated by 𝑓 and 𝑔 if there exist
scalars 𝑎 and 𝑏 such that

𝑎 2𝑥 − 7 + 𝑏 𝑥 2 − 3𝑥 + 5 = 3𝑥 2 − 5𝑥 + 1

⇒ 𝑏𝑥 2 + 2𝑎 − 3𝑏 𝑥 − 7𝑎 + 5𝑏 = 3𝑥 2 − 5𝑥 + 1
Equating corresponding coefficients

𝑏 = 3, 2𝑎 − 3𝑏 = −5, −7𝑎 + 5𝑏 = 1
𝑏 = 3 ⇒ 2𝑎 = −5 + 9 ⇒ 𝑎 = 2
∴ 2 2𝑥 − 7 + 3 𝑥 2 − 3𝑥 + 5 = 3𝑥 2 − 5𝑥 + 1

∴ The function ℎ(𝑥) lies in the space generated by 𝑓(𝑥)


and 𝑔(𝑥)

6. Let 𝑈 be the subspace of ℝ3 generated by the vectors


(3, −1, 2) and(1, 0, 4). Let 𝑉 be the subspace of ℝ3
generated by the vectors (4, −1, 6) and 1, −1, −6 .
Show that 𝑈 = 𝑉.
Solution:

Let 𝑢 𝜖 𝑈. Let us show that 𝑢 𝜖 𝑉

If 𝑢 𝜖 𝑈 ∃ scalars 𝑎 and 𝑏 such that


𝑢 = 𝑎 3, −1, 2 + 𝑏 1, 0, 4 = 3𝑎 + 𝑏, −𝑎, 2𝑎 + 4𝑏 …. (1)

Let us see if we can write 𝑢 as a linear combination of


4, −1, 6 and 1, −1, −6 .

𝑢 = 𝑝 4, −1, 6 + 𝑞 1, −1, −6 = 4𝑝 + 𝑞, −𝑝 − 𝑞, 6𝑝 − 6𝑞 . .. (2)


From (1) and (2)
⇒ 4𝑝 + 𝑞 = 3𝑎 + 𝑏, −𝑝 − 𝑞 = −𝑎, 6𝑝 − 6𝑞 = 2𝑎 + 4𝑏
Solving this system, we get
𝑝 = (2𝑎 + 𝑏)/3 and 𝑞 = (𝑎 − 𝑏)/3

∴ 𝑢 = ((2𝑎 + 𝑏)/3) (4, −1, 6) + ((𝑎 − 𝑏)/3) (1, −1, −6)
⇒ 𝑢𝜖𝑉
Conversely, let 𝑣 𝜖 𝑉
𝑣 = 𝑐 4, −1, 6 + 𝑑 1, −1, −6
= 4𝑐 + 𝑑, −𝑐 − 𝑑, 6𝑐 − 6𝑑 … … (3)
Similarly, we see whether we can write 𝑣 as a linear combination of
(3, −1, 2) and (1, 0, 4).

𝑣 = 𝑟 3, −1, 2 + 𝑠 1, 0, 4 = 3𝑟 + 𝑠, −𝑟, 2𝑟 + 4𝑠 … … (4)

Form (3) and (4)


⇒ 3𝑟 + 𝑠 = 4𝑐 + 𝑑
−𝑟 = −𝑐 − 𝑑
2𝑟 + 4𝑠 = 6𝑐 − 6𝑑

Solving this system ⇒ 𝑟 = 𝑐 + 𝑑 and 𝑠 = 𝑐 − 2𝑑


∴ 𝑣 = 𝑐 + 𝑑 3, −1, 2 + 𝑐 − 2𝑑 1, 0, 4

⇒ 𝑣𝜖𝑈
∴𝑈=𝑉

7. Let 𝑣1 , 𝑣2 𝑎𝑛𝑑 𝑣3 be vectors in a vector space 𝑉. Let 𝑣1


be a linear combination of 𝑣2 and 𝑣3 . If 𝑐1 and 𝑐2 are
non zero scalars, show that 𝑣1 is also a linear
combination of 𝑐1 𝑣2 and 𝑐2 𝑣3 .
Solution:

𝑉 is a vector space.
Since 𝑣1 is a linear combination of 𝑣2 and 𝑣3 , there exist
scalars 𝑎 and 𝑏 such that 𝑣1 = 𝑎𝑣2 + 𝑏𝑣3 .
𝑐1 and 𝑐2 are two non-zero scalars.
We have to show 𝑣1 is also a linear combination of
𝑐1 𝑣2 and 𝑐2 𝑣3 .
We can write 𝑣1 = (𝑎/𝑐1 )(𝑐1 𝑣2 ) + (𝑏/𝑐2 )(𝑐2 𝑣3 )

Here 𝑎/𝑐1 and 𝑏/𝑐2 are scalars.
∴ 𝑣1 is a linear combination of 𝑐1 𝑣2 and 𝑐2 𝑣3 .

8. Do the vectors (2, 1, 0), (−1, 3, 1) and (4, 5, 0) span ℝ3 ?

Solution:

Let 𝑥, 𝑦, 𝑧 𝜖ℝ3
⇒ 𝑥, 𝑦, 𝑧 = 𝑎 2, 1, 0 + 𝑏 −1, 3, 1 + 𝑐 4, 5, 0
= (2𝑎 − 𝑏 + 4𝑐, 𝑎 + 3𝑏 + 5𝑐, 𝑏)
⇒ 2𝑎 − 𝑏 + 4𝑐 = 𝑥
𝑎 + 3𝑏 + 5𝑐 = 𝑦
𝑏=𝑧
Solving this system of equations, we get

𝑎 = (5𝑥 − 4𝑦 + 17𝑧)/6 , 𝑏 = 𝑧 and 𝑐 = (2𝑦 − 𝑥 − 7𝑧)/6

∴ (𝑥, 𝑦, 𝑧) = ((5𝑥 − 4𝑦 + 17𝑧)/6) (2, 1, 0) + 𝑧 (−1, 3, 1)
+ ((2𝑦 − 𝑥 − 7𝑧)/6) (4, 5, 0)
Therefore ℝ3 is spanned by the vectors
2, 1, 0 , −1, 3, 1 and (4, 5, 0).
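A numerical confirmation: the three vectors span ℝ³ exactly when the matrix having them as columns is invertible, and the coefficients for any (x, y, z) come from solving a linear system. Not part of the original text:

```python
import numpy as np

# Columns are the candidate spanning vectors (2,1,0), (-1,3,1), (4,5,0).
M = np.array([[2.0, -1.0, 4.0],
              [1.0,  3.0, 5.0],
              [0.0,  1.0, 0.0]])

# A nonzero determinant means the three columns span R^3.
print(np.linalg.det(M))  # -6.0 (nonzero)

# The coefficients for a sample (x, y, z) come from solving M a = (x, y, z).
target = np.array([1.0, 2.0, 3.0])
a = np.linalg.solve(M, target)
print(np.allclose(M @ a, target))  # True
```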
9. For what values of 𝑡 is the set { (2, −𝑡) , (2𝑡 + 6, 4𝑡) }
linearly dependent?

Solution:
By definition, {(2, −𝑡), (2𝑡 + 6, 4𝑡)} is linearly dependent if there
exist scalars 𝑎, 𝑏, not both zero, such that

𝑎(2, −𝑡) + 𝑏(2𝑡 + 6, 4𝑡) = (0, 0)

Equating corresponding components,

2𝑎 + (2𝑡 + 6)𝑏 = 0 and −𝑎𝑡 + 4𝑏𝑡 = 0, i.e. (4𝑏 − 𝑎)𝑡 = 0

From the second equation, either 𝑡 = 0 or 𝑎 = 4𝑏.

If 𝑡 = 0, the first equation becomes 2𝑎 + 6𝑏 = 0, which has nonzero
solutions (e.g. 𝑎 = 3, 𝑏 = −1), so the set is linearly dependent.

If 𝑎 = 4𝑏, the first equation becomes 8𝑏 + (2𝑡 + 6)𝑏 = 0, i.e.
𝑏(2𝑡 + 14) = 0. For a nonzero 𝑏 this forces 𝑡 = −7.

∴ For 𝑡 = 0 and 𝑡 = −7 the given set is linearly dependent.
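The same values fall out of the determinant test for two vectors in ℝ²: the pair is dependent exactly when det[ 2 2𝑡+6 ; −𝑡 4𝑡 ] = 8𝑡 + 𝑡(2𝑡 + 6) = 2𝑡² + 14𝑡 vanishes. A NumPy check, not part of the original text:

```python
import numpy as np

# Roots of 2t^2 + 14t + 0 = 0, the determinant of the pair as a polynomial in t.
roots = np.roots([2.0, 14.0, 0.0])
print(sorted(roots))  # [-7.0, 0.0]
```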

10. Let the set {𝑣1 , 𝑣2 , 𝑣3 } be linearly dependent in a


vector space 𝑉. Prove that the set {𝑣1 , 𝑣1 + 𝑣2 , 𝑣3 } is
also linearly dependent.

Solution:
{𝑣1 , 𝑣2 , 𝑣3 } is a linearly dependent set, so there exist
𝑐1 , 𝑐2 , 𝑐3 , not all zero, such that
𝑐1 𝑣1 + 𝑐2 𝑣2 + 𝑐3 𝑣3 = 0
This can be rewritten as
(𝑐1 − 𝑐2 )𝑣1 + 𝑐2 (𝑣1 + 𝑣2 ) + 𝑐3 𝑣3 = 0
Since 𝑐1 , 𝑐2 , 𝑐3 are not all zero, the coefficients
𝑐1 − 𝑐2 , 𝑐2 , 𝑐3 are not all zero either.
∴ {𝑣1 , 𝑣1 + 𝑣2 , 𝑣3 } is also a linearly dependent set.
Similarly, if {𝑣1 , 𝑣2 , 𝑣3 } is linearly independent, then the
set {𝑣1 , 𝑣1 + 𝑣2 , 𝑣3 } is also linearly independent.

11. Let {𝑣1 , 𝑣2 } be linearly independent in a vector space


𝑉. Show that if a vector 𝑣3 is not of the form 𝑎𝑣1 +
𝑏𝑣2, then the set {𝑣1 , 𝑣2 , 𝑣3 } is linearly independent.

Solution:

Suppose the set {𝑣1 , 𝑣2 , 𝑣3 } is linearly dependent. Then there
exist scalars 𝑐1 , 𝑐2 , 𝑐3 , not all zero, such that
𝑐1 𝑣1 + 𝑐2 𝑣2 + 𝑐3 𝑣3 = 0
If 𝑐3 = 0, then 𝑐1 𝑣1 + 𝑐2 𝑣2 = 0 with 𝑐1 , 𝑐2 not both zero,
contradicting the linear independence of {𝑣1 , 𝑣2 }. Hence 𝑐3 ≠ 0,
and we may write
𝑣3 = −(𝑐1 /𝑐3 )𝑣1 − (𝑐2 /𝑐3 )𝑣2
so 𝑣3 is of the form 𝑎𝑣1 + 𝑏𝑣2 .
∴ If 𝑣3 is not of the form 𝑎𝑣1 + 𝑏𝑣2 , the set {𝑣1 , 𝑣2 , 𝑣3 } must be
linearly independent.

12. Let 𝑣1 , 𝑣2 be a basis for a vector space 𝑉. Show


that the set of vectors 𝑢1 , 𝑢2 , where 𝑢1 = 𝑣1 +
𝑣2 , 𝑢2 = 𝑣1 − 𝑣2 , is also a basis for 𝑉.
Solution:
𝑣1 , 𝑣2 is a basis for a vector space 𝑉.

∴ {𝑣1 , 𝑣2 } is a linearly independent and spans 𝑉.

Claim: {𝑢1 , 𝑢2 } is a basis for 𝑉.

We have to prove {𝑢1 , 𝑢2 } is a linearly independent set


and spans 𝑉.

Here 𝑢1 = 𝑣1 + 𝑣2 , 𝑢2 = 𝑣1 − 𝑣2 .

Consider 𝑎𝑢1 + 𝑏𝑢2 = 0


𝑎 𝑣1 + 𝑣2 + 𝑏 𝑣1 − 𝑣2 = 0

⇒ 𝑎 + 𝑏 𝑣1 + 𝑎 − 𝑏 𝑣2 = 0

But 𝑣1 , 𝑣2 is a linearly independent


⇒ 𝑎 + 𝑏 = 0, 𝑎−𝑏 =0
⇒ 𝑎 = 0 𝑎𝑛𝑑 𝑏 = 0

∴ {𝑢1 , 𝑢2 } is a linearly independent.

Let 𝑣 ∈ 𝑉. Since {𝑣1 , 𝑣2 } spans 𝑉, we can write 𝑣 = 𝑎𝑣1 + 𝑏𝑣2 .
We look for scalars 𝑐1 , 𝑐2 such that
𝑣 = 𝑐1 𝑢1 + 𝑐2 𝑢2 = 𝑐1 (𝑣1 + 𝑣2 ) + 𝑐2 (𝑣1 − 𝑣2 )
= (𝑐1 + 𝑐2 )𝑣1 + (𝑐1 − 𝑐2 )𝑣2
Comparing with 𝑣 = 𝑎𝑣1 + 𝑏𝑣2 gives 𝑐1 + 𝑐2 = 𝑎 and 𝑐1 − 𝑐2 = 𝑏,
so 𝑐1 = (𝑎 + 𝑏)/2 and 𝑐2 = (𝑎 − 𝑏)/2. Hence {𝑢1 , 𝑢2 } spans 𝑉.

∴ {𝑢1 , 𝑢2 } is a basis for 𝑉.


13. Is the set { 1, −1, 2 , 2, 0, 1 , 3, 0, 0 } basis for ℝ3 ?

Solution:

We have to verify that the set is linearly independent


and spans ℝ3
Consider 𝑐1 1, −1, 2 + 𝑐2 2, 0, 1 + 𝑐3 3, 0, 0 = 0
𝑐1 + 2𝑐2 + 3𝑐3 = 0, −𝑐1 = 0, 2𝑐1 + 𝑐2 = 0
⇒ 𝑐1 = 0 = 𝑐2 = 𝑐3
∴ Given set is linearly independent.

Let (𝑥, 𝑦, 𝑧)𝜖ℝ3 . We try to find scalars 𝑎, 𝑏, 𝑐 such that


𝑥, 𝑦, 𝑧 = 𝑎 1, −1, 2 + 𝑏 2, 0, 1 + 𝑐(3, 0, 0)
⇒ 𝑥, 𝑦, 𝑧 = (𝑎 + 2𝑏 + 3𝑐, −𝑎, 2𝑎 + 𝑏)

Equating corresponding elements we get,

𝑎 + 2𝑏 + 3𝑐 = 𝑥, −𝑎 = 𝑦, 2𝑎 + 𝑏 = 𝑧
⇒ 𝑎 = −𝑦, 𝑏 = 𝑧 + 2𝑦, 𝑐 = (𝑥 − 3𝑦 − 2𝑧)/3
∴ The set spans ℝ3

∴ { 1, −1, 2 , 2, 0, 1 , 3, 0, 0 } is a basis for ℝ3
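A numerical version of the same verification: three vectors in ℝ³ form a basis exactly when the matrix with those vectors as columns has nonzero determinant (equivalently, full rank). Not part of the original text:

```python
import numpy as np

# Columns are the candidate basis vectors (1,-1,2), (2,0,1), (3,0,0).
M = np.array([[ 1.0, 2.0, 3.0],
              [-1.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0]])

print(np.linalg.det(M))          # -3.0, nonzero
print(np.linalg.matrix_rank(M))  # 3, so the vectors form a basis of R^3
```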

14. Determine a basis for 𝑊 = { 𝑎, 𝑏, 𝑐 𝜖ℝ3 2𝑎 + 𝑏 + 3𝑐 =


0} subspace of ℝ3 and find the dimension?
Solution:
Given 𝑊 = { 𝑎, 𝑏, 𝑐 𝜖ℝ3 2𝑎 + 𝑏 + 3𝑐 = 0}
= {𝑎, −3𝑐 − 2𝑎, 𝑐 𝑎 , 𝑐 𝜖ℝ}
= {𝑎 1, −2, 0 + 𝑐 0, −3, 1 𝑎 , 𝑐 𝜖ℝ}

i.e. (𝑎, 𝑏, 𝑐) = 𝑎(1, −2, 0) + 𝑐(0, −3, 1)

{(1, −2, 0), (0, −3, 1)} is a linearly independent set and
spans 𝑊.
∴ {(1, −2, 0), (0, −3, 1)} is a basis for 𝑊
and dim W = 2.

15. Find 𝑇(𝑥, 𝑦) where 𝑇: ℝ2 → ℝ2 is defined by


𝑇 2, 3 = (4, 5) and 𝑇 1, 0 = (0, 0).
Solution:

First of all we show that the vectors (2, 3) and (1, 0)


are linearly independent.
Let 𝑎 2, 3 + 𝑏 1, 0 = 0 ⇒ 2𝑎 + 𝑏, 3𝑎 = 0
2𝑎 + 𝑏 = 0, 3𝑎 = 0 ⇒ 𝑎 = 0, 𝑏=0
∴ 2, 3 , 1, 0 is a linearly independent set.

Let (𝑥, 𝑦) ∈ ℝ2 and (𝑥, 𝑦) = 𝑎(2, 3) + 𝑏(1, 0)
= (2𝑎 + 𝑏, 3𝑎)

⇒ 2𝑎 + 𝑏 = 𝑥, 3𝑎 = 𝑦 ⇒ 𝑎 = 𝑦/3 ; 𝑏 = (3𝑥 − 2𝑦)/3

⇒ (𝑥, 𝑦) = (𝑦/3)(2, 3) + ((3𝑥 − 2𝑦)/3)(1, 0)
Applying the linear transformation 𝑇 to both sides,

𝑇(𝑥, 𝑦) = (𝑦/3)𝑇(2, 3) + ((3𝑥 − 2𝑦)/3)𝑇(1, 0)

= (𝑦/3)(4, 5) + ((3𝑥 − 2𝑦)/3)(0, 0) = (4𝑦/3 , 5𝑦/3).
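The derived formula can be sanity-checked on the defining values. A small sketch, where the function `T` below just encodes the formula obtained above:

```python
import numpy as np

def T(x, y):
    # T as derived: (x,y) = (y/3)(2,3) + ((3x-2y)/3)(1,0),
    # with T(2,3) = (4,5) and T(1,0) = (0,0).
    a = y / 3.0
    b = (3.0 * x - 2.0 * y) / 3.0
    return a * np.array([4.0, 5.0]) + b * np.array([0.0, 0.0])

print(T(2.0, 3.0))  # [4. 5.]   matches T(2,3) = (4,5)
print(T(1.0, 0.0))  # [0. 0.]   matches T(1,0) = (0,0)
print(T(3.0, 6.0))  # [ 8. 10.], i.e. (4y/3, 5y/3)
```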

16. Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Prove that


dim range(𝑇) = dim domain(𝑇) if and only if 𝑇 is one-to-
one.

Solution:

𝑇 is one-to-one ⇔ ker(𝑇) = {0}

⇔ dim ker(𝑇) = 0

⇔ dim range(𝑇) = dim domain(𝑇),

since dim range(𝑇) + dim ker(𝑇) = dim domain(𝑇).

17. Determine the kernel and range of 𝑇: ℝ2 → ℝ3 is


defined by 𝑇 𝑥, 𝑦 = 𝑥 + 𝑦, 𝑥 − 𝑦, 𝑦 . Show that
dim ker(𝑇) + dim range(𝑇) = dim domain(𝑇).

Solution:

Let (𝑥, 𝑦)𝜖ℝ2


𝑥, 𝑦 𝜖 ker 𝑇 ⇒ 𝑇 𝑥, 𝑦 = 0
⇒ 𝑥 + 𝑦, 𝑥 − 𝑦, 𝑦 = (0, 0, 0)
⇒ 𝑥 + 𝑦 = 0, 𝑥 − 𝑦 = 0, 𝑦=0
⇒ 𝑥 = 0, 𝑦 = 0
∴ ker 𝑇 = {0}
∴ dimker 𝑇 = 0
And Range 𝑇 = { 𝑥, 𝑦, 𝑧 𝜖ℝ3 : 𝑇 𝑥, 𝑦 = 𝑥, 𝑦, 𝑧 ∀(𝑥, 𝑦)𝜖ℝ2 }
∴ The range space consists of all vector of the type
𝑥 + 𝑦, 𝑥 − 𝑦, 𝑦 ∀(𝑥, 𝑦)𝜖ℝ2

Range 𝑇 = {𝑥 1, 1, 0 + 𝑦 1, −1, 1 𝑥 , 𝑦𝜖ℝ}


∴ dimrange 𝑇 = 2

∴ dim ker(𝑇) + dim range(𝑇) = 0 + 2

= 2 = dim domain(𝑇)

18. Determine the kernel and the range of the


transformation 𝑇: ℝ3 → ℝ3 defined by 𝑇 𝑋 = 𝐴𝑋 where
1 2 3
𝐴 = 0 −1 1 and 𝑋 is a 3 × 1 column matrix.
1 1 4
Solution:

Kernel: Let 𝑋 = (𝑥1 , 𝑥2 , 𝑥3 )𝑇 𝜖ℝ3


𝑋𝜖 ker 𝑇 ⇒ 𝑇 𝑋 = 0
1 2 3 𝑥1 0
⇒ 0 −1 1 𝑥2 = 0
1 1 4 𝑥3 0
⇒ 𝑥1 + 2𝑥2 + 3𝑥3 = 0
−𝑥2 + 𝑥3 = 0
𝑥1 + 𝑥2 + 4𝑥3 = 0
Solving this system, we get solutions
𝑥1 = −5𝑟, 𝑥2 = 𝑟, 𝑥3 = 𝑟
Then kernel is of the form −5𝑟, 𝑟, 𝑟
∴ ker 𝑇 = {𝑟 −5, 1, 1 𝑟𝜖ℝ}
Range: The range is spanned by column vectors of A.
Write these column vectors as rows of a matrix and
compute an echelon form of the matrix. The non-
zero row vectors will give a basis for the range,
we get
[ 1  0 1 ]   [ 1  0  1 ]   [ 1 0 1 ]   [ 1 0 1 ]
[ 2 −1 1 ] ≃ [ 0 −1 −1 ] ≃ [ 0 1 1 ] ≃ [ 0 1 1 ]
[ 3  1 4 ]   [ 0  1  1 ]   [ 0 1 1 ]   [ 0 0 0 ]
The vectors (1, 0, 1) and (0, 1, 1) span the range of 𝑇.

∴ Range 𝑇 = {𝑠 1, 0, 1 + 𝑟 0, 1, 1 𝑠 , 𝑟𝜖ℝ}.
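Both claims are easy to verify numerically: A annihilates (−5, 1, 1), and every column of A is a combination of the two spanning vectors. A NumPy sketch, not part of the original text:

```python
import numpy as np

A = np.array([[1.0,  2.0, 3.0],
              [0.0, -1.0, 1.0],
              [1.0,  1.0, 4.0]])

# Kernel: every multiple of (-5, 1, 1) is mapped to zero.
print(np.allclose(A @ np.array([-5.0, 1.0, 1.0]), 0.0))  # True

# Range: rank 2, spanned by (1,0,1) and (0,1,1); check that each column
# of A is a combination of these two spanning vectors (least squares fit).
print(np.linalg.matrix_rank(A))  # 2
span = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # spanning vectors as columns
coeffs, residual, *_ = np.linalg.lstsq(span, A, rcond=None)
print(np.allclose(span @ coeffs, A))  # True
```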

19. Let 𝑇: 𝑈 → 𝑉 be a linear transformation. Let 𝑇 be


defined relative to bases {𝑢1 , 𝑢2 , 𝑢3 } and {𝑣1 , 𝑣2 , 𝑣3 } of 𝑈
and 𝑉 as follows: 𝑇 𝑢1 = 𝑣1 + 𝑣2 + 𝑣3
𝑇 𝑢2 = 3𝑣1 − 2𝑣2
𝑇 𝑢3 = 𝑣1 + 2𝑣2 − 𝑣3

Find the matrix of 𝑇 with respect to these bases use this


matrix to find the image of the vector 𝑢 = 3𝑢1 + 2𝑢2 − 5𝑢3
Solution:

The coordinate vectors of 𝑇 𝑢1 , 𝑇(𝑢2 ) and 𝑇(𝑢3 ) are


[1, 1, 1]𝑇 , [3, −2, 0]𝑇 and [1, 2, −1]𝑇
The vectors make up the columns of the matrix of 𝑇
Let 𝐴 = [ 1 3 1 ; 1 −2 2 ; 1 0 −1 ]
Let us now find the image of the vector 𝑢 = 3𝑢1 + 2𝑢2 −
5𝑢3 using this matrix. The coordinate vector of 𝑢 is
𝑋 = [3, 2, −5]𝑇

𝐴𝑋 = [ 1 3 1 ; 1 −2 2 ; 1 0 −1 ] [3, 2, −5]𝑇 = [4, −11, 8]𝑇

𝑇(𝑢) has coordinate vector [4, −11, 8]𝑇
∴ 𝑇 𝑢 = 4𝑣1 − 11𝑣2 + 8𝑣3 is the image of 𝑢 = 3𝑢1 + 2𝑢2 −
5𝑢3 .
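A quick NumPy check of the final computation, not part of the original text:

```python
import numpy as np

A = np.array([[1.0,  3.0,  1.0],
              [1.0, -2.0,  2.0],
              [1.0,  0.0, -1.0]])
X = np.array([3.0, 2.0, -5.0])  # coordinates of u = 3u1 + 2u2 - 5u3

print(A @ X)  # [  4. -11.   8.], so T(u) = 4v1 - 11v2 + 8v3
```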
