Vector


Vectors and vector spaces


Linear algebra usually starts with the study of vectors, which are
understood as quantities having both magnitude and direction. Vectors lend
themselves readily to physical applications. For example, consider a solid
object that is free to move in any direction. When two forces act at the same
time on this object, they produce a combined effect that is the same as a
single force. To picture this, represent the two forces v and w as arrows; the
direction of each arrow gives the direction of the force, and its length gives
the magnitude of the force. The single force that results from combining v
and w is called their sum, written v + w. In the figure, v + w corresponds to
the diagonal of the parallelogram formed from adjacent sides represented
by v and w.

[Figure: coordinate vector addition] Vectors can be added together by first placing their tails at the origin of a coordinate system so that their lengths and directions are unchanged. The coordinates of their heads are then added pairwise; e.g., in two dimensions, their x-coordinates and their y-coordinates are added separately to obtain the resulting vector sum. As shown by the dotted lines in the figure, this vector sum coincides with one diagonal of the parallelogram formed with the original vectors.

Vectors are often expressed using coordinates. For example, in two
dimensions a vector can be defined by a pair of coordinates (a1, a2)
describing an arrow going from the origin (0, 0) to the point (a1, a2). If one
vector is (a1, a2) and another is (b1, b2), then their sum is (a1 + b1, a2 + b2);
this gives the same result as the parallelogram (see the figure). In three
dimensions a vector is expressed using three coordinates (a1, a2, a3), and this
idea extends to any number of dimensions.
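
This coordinate-wise rule translates directly into code. The following is a minimal sketch in Python (the function name add_vectors is ours, chosen only for illustration):

```python
# Coordinate-wise vector addition: pair up the coordinates and add them.
def add_vectors(a, b):
    assert len(a) == len(b), "vectors must have the same dimension"
    return tuple(ai + bi for ai, bi in zip(a, b))

v = (2, 1)  # arrow from the origin (0, 0) to the point (2, 1)
w = (1, 3)  # arrow from the origin (0, 0) to the point (1, 3)
print(add_vectors(v, w))  # (3, 4), the diagonal of the parallelogram
```

The same function works unchanged in three or more dimensions, since it simply pairs up however many coordinates the vectors have.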

Representing vectors as arrows in two or three dimensions is a starting point, but linear algebra has been applied in contexts where this is no
longer appropriate. For example, in some types of differential equations the
sum of two solutions gives a third solution, and any constant multiple of a
solution is also a solution. In such cases the solutions can be treated as
vectors, and the set of solutions is a vector space in the following sense. In a
vector space any two vectors can be added together to give another vector,
and vectors can be multiplied by numbers to give “shorter” or “longer”
vectors. The numbers are called scalars because in early examples they
were ordinary numbers that altered the scale, or length, of a vector. For
example, if v is a vector and 2 is a scalar, then 2v is a vector in the same
direction as v but twice as long. In many modern applications of linear
algebra, scalars are no longer ordinary real numbers, but the important
thing is that they can be combined among themselves by addition,
subtraction, multiplication, and division. For example, the scalars may
be complex numbers, or they may be elements of a finite field such as the
field having only the two elements 0 and 1, where 1 + 1 = 0. The
coordinates of a vector are scalars, and when these scalars are from the
field of two elements, each coordinate is 0 or 1, so each vector can be
viewed as a particular sequence of 0s and 1s. This is very useful in digital
processing, where such sequences are used to encode and transmit data.
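
Over the two-element field, coordinate-wise addition becomes addition modulo 2, which is the bitwise exclusive-or used throughout digital coding. A minimal sketch (function name ours):

```python
# Vector addition over the field with two elements: each coordinate is
# added modulo 2, so 1 + 1 = 0 (the exclusive-or of bits).
def add_gf2(a, b):
    return tuple((ai + bi) % 2 for ai, bi in zip(a, b))

u = (1, 0, 1, 1)
v = (1, 1, 0, 1)
print(add_gf2(u, v))  # (0, 1, 1, 0)
```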


Linear transformations and matrices


Vector spaces are one of the two main ingredients of linear algebra, the
other being linear transformations (or “operators” in the parlance of
physicists). Linear transformations are functions that send, or “map,” one
vector to another vector. The simplest example of a linear transformation
sends each vector to c times itself, where c is some constant. Thus, every
vector remains in the same direction, but all lengths are multiplied by c.
Another example is a rotation, which leaves all lengths the same but alters
the directions of the vectors. Linear refers to the fact that
the transformation preserves vector addition and scalar multiplication. This
means that if T is a linear transformation sending a vector v to T(v), then for
any vectors v and w, and any scalar c, the transformation must satisfy the
properties T(v + w) = T(v) + T(w) and T(cv) = cT(v).
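
These two properties can be checked numerically for any candidate transformation. As an illustration, the sketch below (our own construction) verifies them for a rotation of the plane by an angle theta:

```python
import math

# Rotation of the plane by the angle theta, written out in coordinates.
def rotate(theta, v):
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

theta, c = 0.7, 5.0
v, w = (2.0, 1.0), (-1.0, 3.0)

# T(v + w) = T(v) + T(w)
lhs = rotate(theta, (v[0] + w[0], v[1] + w[1]))
rhs = [a + b for a, b in zip(rotate(theta, v), rotate(theta, w))]
print(all(math.isclose(x, y) for x, y in zip(lhs, rhs)))  # True

# T(cv) = cT(v)
lhs = rotate(theta, (c * v[0], c * v[1]))
rhs = [c * a for a in rotate(theta, v)]
print(all(math.isclose(x, y) for x, y in zip(lhs, rhs)))  # True
```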

When doing computations, linear transformations are treated as matrices. A matrix is a rectangular arrangement of scalars, and two matrices can be added or multiplied as described below. The
product of two matrices shows the result of doing one transformation
followed by another (from right to left), and if the transformations are done
in reverse order the result is usually different. Thus, the product of two
matrices depends on the order of multiplication; if S and T are square
matrices (matrices with the same number of rows as columns) of the same
size, then ST and TS are rarely equal. The matrix for a given transformation
is found using coordinates. For example, in two dimensions a linear
transformation T can be completely determined simply by knowing its effect
on any two vectors v and w that have different directions. Their
transformations T(v) and T(w) are given by two coordinates; therefore, only
four coordinates, two for T(v) and two for T(w), are needed to specify T.
These four coordinates are arranged in a 2-by-2 matrix. In three dimensions
three vectors u, v, and w are needed, and to specify T(u), T(v), and T(w) one
needs three coordinates for each. This results in a 3-by-3 matrix.
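
As a concrete sketch of this bookkeeping (our own illustration, taking v and w to be the standard basis vectors, a common choice), the matrix of a transformation is assembled by recording its effect on each basis vector as a column:

```python
import math

# Build the 2-by-2 matrix of a linear transformation T from its effect on
# the basis vectors (1, 0) and (0, 1): the images T(v), T(w) become columns.
def matrix_of(T):
    c1, c2 = T((1.0, 0.0)), T((0.0, 1.0))
    return [[c1[0], c2[0]],
            [c1[1], c2[1]]]

theta = math.pi / 2  # a quarter-turn rotation of the plane
T = lambda v: (v[0] * math.cos(theta) - v[1] * math.sin(theta),
               v[0] * math.sin(theta) + v[1] * math.cos(theta))
print(matrix_of(T))  # approximately [[0.0, -1.0], [1.0, 0.0]]
```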

Eigenvectors
When studying linear transformations, it is extremely useful to find nonzero
vectors whose direction is left unchanged by the transformation. These are
called eigenvectors (also known as characteristic vectors). If v is an
eigenvector for the linear transformation T, then T(v) = λv for some scalar
λ. This scalar is called an eigenvalue. The eigenvalue of greatest absolute value, along with its associated eigenvector, has special significance for many physical applications. This is because whatever process is represented by the linear transformation often acts repeatedly, feeding the output of one application back in as the input of the next, which results in almost any (nonzero) starting vector converging on the direction of the eigenvector associated with the largest eigenvalue, rescaled at each step by that eigenvalue.
In other words, the long-term behaviour of the system is determined by its
eigenvectors.
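
This repeated-application behaviour is the idea behind the power-iteration method for computing a dominant eigenvector numerically. A minimal sketch, assuming the matrix has a single eigenvalue of strictly greatest absolute value:

```python
# Power iteration: repeatedly apply the matrix and rescale. The direction
# of x converges to the eigenvector of the dominant eigenvalue, and the
# rescaling factor converges to that eigenvalue.
def power_iteration(A, steps=50):
    x = [1.0] * len(A)
    for _ in range(steps):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        scale = max(abs(c) for c in y)
        x = [c / scale for c in y]
    return x, scale

A = [[2.0, 1.0],
     [1.0, 2.0]]  # eigenvalues 3 and 1
vec, val = power_iteration(A)
print(val)  # close to 3.0
print(vec)  # close to a multiple of (1, 1)
```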

Finding the eigenvectors and eigenvalues for a linear transformation is often done using matrix algebra, first developed in the mid-19th century by
the English mathematician Arthur Cayley. His work formed the foundation
for modern linear algebra.

Matrix

Matrix, a set of numbers arranged in rows and columns so as to form a
rectangular array. The numbers are called the elements, or entries, of the
matrix. Matrices have wide applications in engineering, physics, economics,
and statistics as well as in various branches of mathematics. Matrices also
have important applications in computer graphics, where they have been
used to represent rotations and other transformations of images.

Historically, it was not the matrix but a certain number associated with a
square array of numbers called the determinant that was first recognized.
Only gradually did the idea of the matrix as an algebraic entity emerge. The
term matrix was introduced by the 19th-century English
mathematician James Sylvester, but it was his friend the
mathematician Arthur Cayley who developed the algebraic aspect
of matrices in two papers in the 1850s. Cayley first applied them to the
study of systems of linear equations, where they are still very useful. They
are also important because, as Cayley recognized, certain sets of matrices
form algebraic systems in which many of the ordinary laws
of arithmetic (e.g., the associative and distributive laws) are valid but in
which other laws (e.g., the commutative law) are not valid.


If there are m rows and n columns, the matrix is said to be an “m by n” matrix, written “m × n.” For example,

A = [ 1   3   8 ]
    [ 2  −4   5 ]

is a 2 × 3 matrix. A matrix with n rows and n columns is called a square matrix of order n. An ordinary number can be regarded as a 1 × 1 matrix;
thus, 3 can be thought of as the matrix [3]. A matrix with only one row
and n columns is called a row vector, and a matrix with only one column
and n rows is called a column vector.

In a common notation, a capital letter denotes a matrix, and the corresponding small letter with a double subscript describes an element of
the matrix. Thus, aij is the element in the ith row and jth column of the
matrix A. If A is the 2 × 3 matrix shown above, then a11 = 1, a12 = 3, a13 =
8, a21 = 2, a22 = −4, and a23 = 5. Under certain conditions, matrices can be
added and multiplied as individual entities, giving rise to important
mathematical systems known as matrix algebras.
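
This double-subscript notation maps directly onto array indexing in a programming language, with the caveat that Python counts rows and columns from 0 rather than 1. Using the 2 × 3 matrix shown above:

```python
# The 2 x 3 matrix from the text, stored as a list of rows. Python indexes
# from 0, so the textbook element a11 is A[0][0], a23 is A[1][2], and so on.
A = [[1,  3, 8],
     [2, -4, 5]]

print(A[0][0])  # a11 = 1
print(A[0][1])  # a12 = 3
print(A[1][1])  # a22 = -4
print(A[1][2])  # a23 = 5
```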

Matrices occur naturally in systems of simultaneous equations. In a system of equations for the unknowns x and y, the array of the coefficients of the unknowns forms a matrix. The solution of the equations depends entirely on these numbers and on their particular arrangement; if two of the coefficients were interchanged, the solution would generally not be the same.
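
As an illustration (the particular system here is hypothetical, chosen only to show the mechanics), the coefficients can be collected into a matrix and the system handed to a standard solver such as numpy.linalg.solve:

```python
import numpy as np

# A hypothetical system of two equations in the unknowns x and y:
#   1x + 3y = 5
#   2x + 4y = 6
A = np.array([[1.0, 3.0],
              [2.0, 4.0]])  # the matrix of coefficients of the unknowns
b = np.array([5.0, 6.0])    # the right-hand sides

print(np.linalg.solve(A, b))  # [-1.  2.], i.e., x = -1, y = 2
```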

Two matrices A and B are equal to one another if they possess the same
number of rows and the same number of columns and if aij = bij for
each i and each j. If A and B are two m × n matrices, their sum S = A + B is
the m × n matrix whose elements sij = aij + bij. That is, each element of S is
equal to the sum of the elements in the corresponding positions of A and B.
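
A minimal sketch of this rule (function name ours):

```python
# Matrix addition: matrices of the same shape are added element by element,
# so that s_ij = a_ij + b_ij.
def mat_add(A, B):
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```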


A matrix A can be multiplied by an ordinary number c, which is called a scalar. The product is denoted by cA or Ac and is the matrix whose
elements are caij.
The multiplication of a matrix A by a matrix B to yield a matrix C is defined
only when the number of columns of the first matrix A equals the number of
rows of the second matrix B. To determine the element cij, which is in the ith
row and jth column of the product, the first element in the ith row of A is
multiplied by the first element in the jth column of B, the second element in
the row by the second element in the column, and so on until the last
element in the row is multiplied by the last element of the column; the sum
of all these products gives the element cij. In symbols, for the case
where A has m columns and B has m rows, cij = ai1b1j + ai2b2j + ⋯ + aimbmj. The matrix C has as many rows as A and as many columns as B.
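
Written out directly from this row-by-column description (function name ours):

```python
# Matrix multiplication: c_ij is the sum of products of the i-th row of A
# with the j-th column of B, defined only when A has as many columns as B
# has rows.
def mat_mul(A, B):
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
```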

Unlike the multiplication of ordinary numbers a and b, in which ab always equals ba, the multiplication of matrices A and B is not commutative. It is,
however, associative and distributive over addition. That is, when the
operations are possible, the following equations always hold true: A(BC) =
(AB)C, A(B + C) = AB + AC, and (B + C)A = BA + CA. If the 2 × 2
matrix A whose rows are (2, 3) and (4, 5) is multiplied by itself, then the
product, usually written A2, has rows (16, 21) and (28, 37).
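
Both claims are easy to verify numerically; a quick check (using numpy's @ operator for matrix multiplication):

```python
import numpy as np

A = np.array([[2, 3],
              [4, 5]])
print(A @ A)  # [[16 21] [28 37]], matching the text

B = np.array([[0, 1],
              [1, 0]])
print(A @ B)  # [[3 2] [5 4]]
print(B @ A)  # [[4 5] [2 3]]; AB and BA differ
```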

A matrix O with all its elements 0 is called a zero, or null, matrix. A square
matrix A with 1s on the main diagonal (upper left to lower right) and 0s
everywhere else is called an identity, or unit, matrix. It
is denoted by I or In to show that its order is n. If B is any square matrix
and I and O are the unit and zero matrices of the same order, it is always
true that B + O = O + B = B and BI = IB = B. Hence O and I behave like the
0 and 1 of ordinary arithmetic. (In fact, ordinary arithmetic is the special
case of matrix arithmetic in which all matrices are 1 × 1.)

A square matrix A in which the elements aij are nonzero only when i = j is
called a diagonal matrix. Diagonal matrices have the special property that
multiplication of them is commutative; that is, for two diagonal
matrices A and B, AB = BA. The trace of a square matrix is the sum of the
elements on the main diagonal.
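
A brief sketch of these special matrices (using numpy's identity, zeros, diag, and trace helpers):

```python
import numpy as np

B = np.array([[2, 3],
              [4, 5]])
I = np.identity(2, dtype=int)    # unit matrix of order 2
O = np.zeros((2, 2), dtype=int)  # zero matrix

# O and I behave like the 0 and 1 of ordinary arithmetic.
print(np.array_equal(B + O, B) and np.array_equal(B @ I, B))  # True

# Diagonal matrices commute with one another.
D1, D2 = np.diag([2, 3]), np.diag([5, 7])
print(np.array_equal(D1 @ D2, D2 @ D1))  # True

print(np.trace(B))  # 7, the sum 2 + 5 of the main-diagonal elements
```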

Associated with each square matrix A is a number that is known as the determinant of A, denoted det A. For example, for the 2 × 2 matrix

A = [ a   b ]
    [ c   d ]

det A = ad − bc. A square matrix B is called nonsingular if det B ≠ 0. If B is nonsingular, there is a matrix called the inverse of B,
denoted B−1, such that BB−1 = B−1B = I. The equation AX = B, in
which A and B are known matrices and X is an unknown matrix, can be
solved uniquely if A is a nonsingular matrix, for then A−1 exists and both
sides of the equation can be multiplied on the left by it: A−1(AX) = A−1B.
Now A−1(AX) = (A−1A)X = IX = X; hence the solution is X = A−1B. A system
of m linear equations in n unknowns can always be expressed as a matrix
equation AX = B in which A is the m × n matrix of the coefficients of the
unknowns, X is the n × 1 matrix of the unknowns, and B is the m × 1 matrix containing the numbers on the right-hand side of the equation.
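
A sketch of this reasoning in code (in practice a routine such as numpy.linalg.solve is preferred to forming the inverse explicitly, but the inverse makes the algebra visible):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
B = np.array([[7.0],
              [13.0]])

print(np.linalg.det(A))       # about -2, nonzero, so A is nonsingular
X = np.linalg.inv(A) @ B      # X = A^-1 B, the unique solution
print(X)                      # [[2.] [1.]]
print(np.allclose(A @ X, B))  # True
```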

A problem of great significance in many branches of science is the following: given a square matrix A of order n, find a nonzero n × 1 matrix X, called
an n-dimensional vector, such that AX = cX. Here c is a number called
an eigenvalue, and X is called an eigenvector. The existence of an
eigenvector X with eigenvalue c means that a certain transformation of
space associated with the matrix A stretches space in the direction of the
vector X by the factor c.
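
In code, an eigenpair can be computed with a standard routine and the defining equation AX = cX checked directly (a sketch using numpy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)  # eigenvalues and, as columns, eigenvectors
c, X = vals[0], vecs[:, 0]
print(c)                          # one eigenvalue of A (3.0 or 1.0)
print(np.allclose(A @ X, c * X))  # True: A stretches X by the factor c
```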
