Module 2
Vector spaces
Lecture 7
In various branches of physics and mathematics, one restricts attention to a set of numbers or physical quantities for which it is meaningful and interesting to talk about 'linear combinations' of the objects in the set. Simple examples of such combinations have already been seen for vectors in three-dimensional space.
As we shall see later, a linear vector space is defined formally through a number of properties. The word 'space' suggests something geometrical, as does the word 'vector', to most readers. As we get into the chapter on linear vector spaces, the reader will come to terms with the fact that much of the terminology has geometrical connotations.
Consider a vector r⃗ = x î + y ĵ + z k̂. The vector starts from the origin (0, 0, 0) and ends at the point (x, y, z). Thus there is a one-to-one correspondence between such vectors and the ordered triples (x, y, z). The collection of all such points, or all such vectors, makes up three-dimensional space, often denoted R³ (R: real), V³ (V: vector) or E³ (E: Euclidean, after the Greek mathematician Euclid). Similarly, the set of two-dimensional vectors constitutes the space R², and in general an n-dimensional vector is an element of the space Rⁿ. For example, the 4-vectors of special relativity are ordered sets of four numbers or variables, and accordingly we say that space-time is 4-dimensional.
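The idea of Rⁿ as ordered n-tuples with componentwise operations can be sketched in a few lines of code. The helper names `add` and `scale` below are illustrative, not from the text.

```python
# A minimal sketch: an element of R^n represented as an ordered tuple,
# with componentwise addition and scalar multiplication.

def add(u, v):
    """Componentwise sum of two vectors in R^n."""
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    """Multiply every component of v by the scalar c."""
    return tuple(c * a for a in v)

r3 = (1.0, 2.0, 3.0)                 # a vector in R^3
four_vector = (1.0, 0.0, 0.0, 0.5)   # an ordered set of four numbers, as in special relativity

print(add(r3, (4.0, 5.0, 6.0)))   # (5.0, 7.0, 9.0)
print(scale(2.0, four_vector))    # (2.0, 0.0, 0.0, 1.0)
```

The same two operations work in any dimension, which is what makes the Rⁿ picture uniform.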
Before going ahead with the detailed discussion of vector spaces, it is necessary to define the 'dimension' of a vector space, and for that one must first understand the linear independence of vectors. Formally, linear dependence is defined as follows. Let V be a vector space and let α₁, α₂, …, αₙ be vectors in a subset S of V. If there exist coefficients C₁, C₂, …, Cₙ, at least one of them nonzero, such that
C₁α₁ + C₂α₂ + ⋯ + Cₙαₙ = 0, (7.1)
then the vectors are linearly dependent. Equivalently, if any αᵢ can be written in terms of the other vectors αⱼ (i ≠ j), linear independence is lost; the set is linearly independent only when Eq. (7.1) forces every Cᵢ to be zero.
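For three vectors in R³, the condition that Eq. (7.1) has only the trivial solution is equivalent to the determinant of the matrix built from the vectors being nonzero. A small sketch of this test (the helper names `det3` and `independent` are assumptions for illustration):

```python
def det3(a, b, c):
    """Determinant of the 3x3 matrix whose columns are the vectors a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

def independent(a, b, c, tol=1e-12):
    """Three vectors in R^3 are linearly independent iff det3 is nonzero."""
    return abs(det3(a, b, c)) > tol

print(independent((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # True: the standard basis
print(independent((1, 2, 3), (2, 4, 6), (0, 0, 1)))  # False: second vector = 2 x first
```

The second example fails precisely because one vector is expressible in terms of another, as described above.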
Suppose we have two linearly independent vectors. They define a plane, and all linear combinations of these two vectors lie in that plane. Whether we regard them as vectors in R³ lying in this plane, or as independent vectors in R², they span the space R².
Thus the definition of spanning is as follows: a set of vectors spans a space if every vector in the space can be written as a linear combination of the spanning set. A set of linearly independent vectors which spans a vector space is called a basis.
The dimension of a vector space is equal to the number of basis vectors. It does not matter how one chooses the basis vectors for a given vector space corresponding to a particular problem; the number of basis vectors will always be the same, and this number is the dimension of the space. In three dimensions, one frequently uses the unit vectors î, ĵ, k̂ to describe a particular coordinate system called the Cartesian coordinate system.
At the outset, let us define vector fields. Consider a function F(x, y) = (y, −x) in two-dimensional space (R²). We can calculate the values of the function at a set of discrete points; for example,
F(0, 1) = (1, 0).
We can also plot the value of F(x, y) as a vector whose tail is anchored at the point (x, y). By plotting each of the vectors defined by F(x, y) = (y, −x), and many more of them, we can see the structure of the vector field, whose arrows appear to rotate in a clockwise sense. This is the vector field corresponding to F(x, y).
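The clockwise pattern can be seen by sampling the field at a few points on the unit circle; each output vector is perpendicular to the position vector and points in the clockwise direction. A short sketch:

```python
def F(x, y):
    """The vector field F(x, y) = (y, -x) from the text."""
    return (y, -x)

# Sample the field at four points on the unit circle. At the top point (0, 1)
# the arrow points along +x, at the right point (1, 0) it points along -y,
# and so on: the arrows circulate clockwise about the origin.
for point in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
    print(point, "->", F(*point))
```

Running this reproduces the value F(0, 1) = (1, 0) quoted above and makes the rotation sense explicit.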
The definition of a vector space V, whose elements are called vectors, involves an arbitrary field K whose elements are called scalars. To make the notation clear:
Vector space – V
Elements of V – u, v, w
Arbitrary field – K
Elements of K – a, b, c
Any set of objects obeying the following properties forms a vector space.
(A) There is a definite rule for forming the vector sum |v> + |w>, and for multiplying a vector by a scalar, with the following properties.
(i) The result of these operations is another element of the same space; this property is called closure:
|v> + |w> ∈ V.
(ii) Scalar multiplication is distributive over vector addition:
α(|v> + |w>) = α|v> + α|w>
(iii) Scalar multiplication is distributive over scalar addition:
(α + β)|v> = α|v> + β|v>
(iv) Scalar multiplication is associative:
α(β|v>) = αβ|v>
(v) Addition is commutative:
|v> + |w> = |w> + |v>
(vi) Addition is associative:
|w> + (|v> + |z>) = (|w> + |v>) + |z>
(vii) There exists a null vector |0> obeying
|v> + |0> = |v>
(viii) For every vector |v> there exists an inverse |−v> such that adding it to |v> yields the null vector:
|v> + |−v> = |0>. For example, the set of arrows with only positive z-components does not form a vector space, as there is no inverse.
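These axioms can be checked directly for vectors in R³ represented as tuples. The helpers `vadd` and `smul` below are illustrative names, and the checks run over sample vectors rather than proving the axioms in general:

```python
# Hypothetical helpers for vectors |v> in R^3 represented as tuples;
# each assertion below mirrors one axiom from the list above.
def vadd(v, w):
    """Vector sum |v> + |w>, formed componentwise."""
    return tuple(a + b for a, b in zip(v, w))

def smul(c, v):
    """Scalar multiple c|v>."""
    return tuple(c * a for a in v)

v, w, z = (1, 2, 3), (4, 5, 6), (7, 8, 9)
alpha, beta = 2, 3
null = (0, 0, 0)

assert vadd(v, w) == vadd(w, v)                                          # (v) commutative
assert vadd(w, vadd(v, z)) == vadd(vadd(w, v), z)                        # (vi) associative
assert smul(alpha, vadd(v, w)) == vadd(smul(alpha, v), smul(alpha, w))   # (ii) distributive
assert smul(alpha + beta, v) == vadd(smul(alpha, v), smul(beta, v))      # (iii) distributive
assert smul(alpha, smul(beta, v)) == smul(alpha * beta, v)               # (iv) associative
assert vadd(v, null) == v                                                # (vii) null vector
assert vadd(v, smul(-1, v)) == null                                      # (viii) inverse
print("all axioms hold for these sample vectors")
```

The last check also illustrates why arrows restricted to positive z-components fail: the inverse smul(-1, v) has a negative z-component and would fall outside such a set.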
Consider the set of all 2 × 2 matrices. We know how to add them and multiply them by scalars, and they obey the associativity, distributivity and closure requirements. A null matrix exists, with all elements zero. Thus we have a genuine vector space. Similarly, the set of all polynomials of degree at most n,
P(x) = a₀ + a₁x + ⋯ + aₙxⁿ,
forms a vector space.
It is important to realize that the elements of a vector space are characterized by two
operations: one in which two elements are added together, and another in which the
elements are multiplied by scalars.
Linear Combination
A vector v is a linear combination of the vectors u₁, u₂, …, uₘ if it can be written as
v = a₁u₁ + a₂u₂ + ⋯ + aₘuₘ
for some scalars a₁, …, aₘ. As an example, let us express the column vector (3, 7, −4) as a linear combination of (1, 2, 3), (2, 3, 5) and (3, 7, 6):
(3, 7, −4) = x (1, 2, 3) + y (2, 3, 5) + z (3, 7, 6)   (as column vectors)
Comparing components gives
𝑥 + 2𝑦 + 3𝑧 = 3
2𝑥 + 3𝑦 + 5𝑧 = 7
3𝑥 + 7𝑦 + 6𝑧 = −4
Solving, we obtain x = 2, y = −4 and z = 3.
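The quoted solution can be confirmed by direct substitution into the three component equations. A minimal check:

```python
# Verify by substitution that x = 2, y = -4, z = 3 solves the system
# obtained from the linear-combination example above.
x, y, z = 2, -4, 3
assert x + 2*y + 3*z == 3
assert 2*x + 3*y + 5*z == 7
assert 3*x + 7*y + 6*z == -4
print("solution verified: x, y, z =", x, y, z)
```

Since the three column vectors are linearly independent, this solution is unique.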