Matrices Definition
In mathematics, an anti-diagonal matrix is a matrix where all the entries are zero except those
on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-
diagonal.
More precisely, an n-by-n matrix A is an anti-diagonal matrix if the (i, j) element is zero for all i, j ∈
{1, …, n} with i + j ≠ n + 1.
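A minimal NumPy sketch (not part of the original text; the helper name and test values are mine) that builds an anti-diagonal matrix and checks the defining condition:

    import numpy as np

    def is_anti_diagonal(A):
        # every entry off the anti-diagonal must be zero; with 0-based indices
        # the anti-diagonal is i + j == n - 1 (i.e. i + j == n + 1 when 1-based)
        n = A.shape[0]
        i, j = np.indices(A.shape)
        return bool(np.all(A[i + j != n - 1] == 0))

    A = np.fliplr(np.diag([1, 2, 3]))          # puts 1, 2, 3 on the anti-diagonal
    print(is_anti_diagonal(A))                 # True
    print(is_anti_diagonal(np.ones((3, 3))))   # False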
Augmented matrix
In linear algebra, the augmented matrix of a matrix is obtained by appending the columns of another matrix to it, writing the two side by side as a single matrix.
This is useful when solving systems of linear equations; the augmented matrix may also be used to find the inverse of a matrix by combining it with the identity matrix.
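As a rough illustration (this sketch is not from the original text; the matrix values are invented and the elimination below assumes non-zero pivots), one can append the identity matrix and row-reduce so that the right block becomes the inverse:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [5.0, 3.0]])
    aug = np.hstack([A, np.eye(2)])   # the augmented matrix [A | I]

    n = A.shape[0]
    for col in range(n):              # Gauss-Jordan elimination, no pivoting
        aug[col] /= aug[col, col]     # scale the pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]

    A_inv = aug[:, n:]                # the right block now holds the inverse of A
    print(np.allclose(A @ A_inv, np.eye(n)))   # True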
Band matrix
In mathematics, particularly matrix theory, a band matrix is a sparse matrix, whose non-zero
entries are confined to a diagonal band, comprising the main diagonal and zero or more
diagonals on either side.
Formally, an n×n matrix A = (a_{i,j}) is a band matrix if all matrix elements are zero outside a
diagonally bordered band whose range is determined by constants k1 and k2:
a_{i,j} = 0 whenever j < i − k1 or j > i + k2, with k1, k2 ≥ 0.
The quantities k1 and k2 are the left and right half-bandwidth, respectively. The bandwidth of
the matrix is k1 + k2 + 1 (in other words, the smallest number of adjacent diagonals to which
the non-zero elements are confined).
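A small NumPy sketch (the function name and test matrix are my own) that measures the half-bandwidths k1 and k2 and the resulting bandwidth:

    import numpy as np

    def half_bandwidths(A):
        # k1: how far non-zero entries extend below the main diagonal
        # k2: how far they extend above it
        i, j = np.nonzero(A)
        k1 = int(np.max(i - j, initial=0))
        k2 = int(np.max(j - i, initial=0))
        return k1, k2

    A = np.array([[1, 2, 0, 0],
                  [3, 4, 5, 0],
                  [0, 6, 7, 8],
                  [0, 0, 9, 1]])
    k1, k2 = half_bandwidths(A)
    print(k1, k2, k1 + k2 + 1)   # 1 1 3, i.e. a tridiagonal matrix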
Conjugate transpose
"Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint matrix".
The conjugate transpose of an m-by-n matrix A with complex entries is the n-by-m matrix A^H obtained by transposing A and taking the complex conjugate of each entry:
(A^H)_{ij} = \overline{A_{ji}},
where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and the overbar
denotes a scalar complex conjugate. (The complex conjugate of a + bi, where a and b are
reals, is a − bi.)
This definition can also be written as A^H = (\overline{A})^T = \overline{A^T}, where A^T denotes the transpose and \overline{A} denotes the matrix with complex
conjugated entries.
Other names for the conjugate transpose of a matrix are Hermitian conjugate, adjoint matrix,
or transjugate. The conjugate transpose of a matrix A can be denoted by any of these
symbols:
A^* or A^H, commonly used in linear algebra
A^† (sometimes pronounced "A dagger"), universally used in quantum mechanics
A^+, although this symbol is more commonly used for the Moore-Penrose
pseudoinverse
In some contexts, \overline{A} denotes the matrix with complex conjugated entries, and thus
the conjugate transpose is denoted by \overline{A}^T or \overline{A^T}.
Example
If
A = \begin{pmatrix} 1 & -2-i & 5 \\ 1+i & i & 4-2i \end{pmatrix}
then
A^H = \begin{pmatrix} 1 & 1-i \\ -2+i & -i \\ 5 & 4+2i \end{pmatrix}
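The same computation can be checked with NumPy (this snippet is mine, not part of the original article); .conj().T conjugates each entry and transposes:

    import numpy as np

    A = np.array([[1, -2 - 1j, 5],
                  [1 + 1j, 1j, 4 - 2j]])
    A_H = A.conj().T      # conjugate each entry, then transpose
    print(A_H)
    print(A_H.shape)      # (3, 2)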
Diagonal matrix
In linear algebra, a diagonal matrix is a square matrix in which the entries outside the main
diagonal (↘) are all zero. The diagonal entries themselves may or may not be zero. Thus, the
matrix D = (d_{i,j}) with n columns and n rows is diagonal if:
d_{i,j} = 0 whenever i ≠ j.
The term diagonal matrix may sometimes refer to a rectangular diagonal matrix,
which is an m-by-n matrix with only the entries of the form d_{i,i} possibly non-zero; for
example,
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \end{pmatrix}, or \begin{pmatrix} 1 & 0 \\ 0 & 4 \\ 0 & 0 \end{pmatrix}.
Hermitian matrix
A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is
equal to its own conjugate transpose – that is, the element in the ith row and jth column is equal
to the complex conjugate of the element in the jth row and ith column, for all indices i and j:
a_{ij} = \overline{a_{ji}}.
If the conjugate transpose of a matrix A is denoted by A^H, then the Hermitian property can
be written concisely as
A = A^H.
Examples
For example,
\begin{pmatrix} 2 & 2+i & 4 \\ 2-i & 3 & i \\ 4 & -i & 1 \end{pmatrix}
is a Hermitian matrix.
Identity matrix
In linear algebra, the identity matrix or unit matrix of size n is the n-by-n square matrix with
ones on the main diagonal and zeros elsewhere. It is denoted by In, or simply by I if the size is
immaterial or can be trivially determined by the context. (In some fields, such as quantum
mechanics, the identity matrix is denoted by a boldface one, 1; otherwise it is identical to I.)
Some mathematics books use U and E to represent the identity matrix (meaning "Unit
Matrix" and "Elementary Matrix", or from the German "Einheitsmatrix", respectively),[1]
although I is considered more universal.
The important property of matrix multiplication of the identity matrix is that for any m-by-n matrix A,
I_m A = A I_n = A.
Invertible matrix
In linear algebra, an n-by-n (square) matrix A is
called invertible or nonsingular or nondegenerate if there exists an n-by-n matrix B such that
AB = BA = I_n,
where I_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix
multiplication. If this is the case, then the matrix B is uniquely determined by A and is called
the inverse of A, denoted by A^−1. It follows from the theory of matrices that if AB = I_n for
square matrices A and B, then BA = I_n also holds.
Matrix inversion is the process of finding the matrix B that satisfies the prior
equation for a given invertible matrix A.
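A minimal NumPy sketch (example values are my own): compute B with np.linalg.inv and confirm that AB = BA = I.

    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    B = np.linalg.inv(A)      # matrix inversion
    I = np.eye(2)
    print(np.allclose(A @ B, I) and np.allclose(B @ A, I))   # True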
Involutory matrix
In mathematics, an involutory matrix is a matrix that is its own inverse. That is, matrix A is an
involution if and only if A² = I. One of the three classes of elementary matrix is involutory, namely the row-
interchange elementary matrix. A special case of another class of elementary matrix, that which
represents multiplication of a row or column by −1, is also involutory; it is in fact a trivial example
of a signature matrix, all of which are involutory.
Involutory matrices are all square roots of the identity matrix. This is simply a consequence of the
fact that any nonsingular matrix multiplied by its inverse is the identity. If A is an n × n matrix,
then A is involutory if and only if ½(A + I) is idempotent.
An involutory matrix which is also symmetric is an orthogonal matrix, and thus represents
an isometry (a linear transformation which preserves Euclidean distance). A reflection matrix is an
example of an involutory matrix.
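A short NumPy check (the matrix is chosen for illustration): a row-interchange elementary matrix is its own inverse, and ½(A + I) is idempotent exactly when A is involutory.

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])              # row-interchange elementary matrix
    P = (A + np.eye(2)) / 2
    print(np.allclose(A @ A, np.eye(2)))    # True: A is involutory (its own inverse)
    print(np.allclose(P @ P, P))            # True: (A + I)/2 is idempotent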
Irregular matrix
An irregular matrix, or ragged matrix, can be described as a matrix that has a different
number of elements in each row. Ragged matrices are not used in linear algebra, since
standard matrix transformations cannot be performed on them, but they are useful as
arrays in computing. Irregular matrices are typically stored using Iliffe vectors.
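For illustration (not from the original text), a ragged matrix in Python is simply a list of rows of different lengths, the analogue of an Iliffe vector of row pointers:

    # each row is its own list, so rows may have different lengths
    ragged = [
        [1, 2, 3],
        [4],
        [5, 6],
    ]
    for row in ragged:
        print(len(row), row)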
Organization of a matrix
This page lists some important classes of matrices used in mathematics, science and
engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array
of numbers called entries. Matrices have a long history of both
study and application, leading to diverse ways of classifying matrices. A first group is
matrices satisfying concrete conditions of the entries, including constant matrices. An
important example is the identity matrix I_n, which has 1 in every entry on the main diagonal and 0 elsewhere.
The following lists matrices whose entries are subject to certain conditions. Many of
them apply to square matrices only, that is matrices with the same number of columns
and rows. The main diagonal of a square matrix is the diagonal joining the upper left
corner and the lower right one or equivalently the entries ai,i. The other diagonal is called
anti-diagonal (or counter-diagonal).
Binary matrix: a matrix whose entries are all either 0 or 1; a synonym for (0,1)-matrix and logical matrix. [1]
Diagonally dominant matrix: a matrix whose entries satisfy |a_{ii}| > Σ_{j≠i} |a_{ij}| for every row i.
Permutation matrix: a matrix representation of a permutation; a square matrix with exactly one 1 in each row and column, and all other elements 0.
The list below comprises matrices whose elements are constant for any given dimension
(size) of matrix. The matrix entries will be denoted a_{ij}. The Kronecker symbol δ_{ij} is used
for two integers i and j: it is 1 if i = j and 0 otherwise.
Matrix of ones: a matrix with all entries equal to one; a_{ij} = 1.
Shift matrix: a matrix with ones on the superdiagonal or subdiagonal and zeroes elsewhere; a_{ij} = δ_{i+1,j} or a_{ij} = δ_{i−1,j}. Multiplication by it shifts matrix elements by one position.
This matrix product is denoted AB. Unlike the product of numbers, matrix products are
not commutative, that is to say AB need not be equal to BA. A number of notions are
concerned with the failure of this commutativity. An inverse of a square matrix A is a
matrix B (necessarily of the same dimension as A) such that AB = I. Equivalently, BA =
I. An inverse need not exist. If it exists, B is uniquely determined, and is also called the
inverse of A, denoted A^−1.
Idempotent matrix: a matrix that has the property A² = AA = A.
Invertible matrix: a square matrix having a multiplicative inverse, that is, a matrix B such that AB = BA = I. Invertible matrices form the general linear group.
Nilpotent matrix: a square matrix satisfying A^q = 0 for some positive integer q. Equivalently, the only eigenvalue of A is 0.
Orthogonal matrix: a matrix whose inverse is equal to its transpose, A^−1 = A^T. Orthogonal matrices form the orthogonal group.
Similar matrix: two matrices A and B are similar if there exists an invertible matrix P such that P^−1AP = B. Compare with congruent matrices.
Unipotent matrix: a square matrix with all eigenvalues equal to 1. Equivalently, A − I is nilpotent. See also unipotent group.
Adjugate matrix: the transpose of the matrix of cofactors of a given square matrix. Used for calculating the inverse of a matrix via Laplace's formula.
Distance matrix: a square matrix containing the distances, taken pairwise, of a set of points. Used in computer vision and network analysis; see also Euclidean distance matrix.
Rotation matrix: a matrix representing a rotational geometric transformation. See also special orthogonal group, Euler angles.
• Derogatory matrix — a square n×n matrix whose minimal polynomial is of order less
than n.
• Moment matrix — a symmetric matrix whose elements are the products of common
row/column index dependent monomials.
• X-Y-Z matrix — a generalisation of the (rectangular) matrix to a cuboidal form (a 3-
dimensional array of entries).
• Substitution matrix
Main diagonal
In linear algebra, the main diagonal (sometimes leading diagonal or primary diagonal)
of a matrix A is the collection of cells Ai,j where i is equal to j.
The main diagonal of a square matrix is the diagonal which runs from the top left corner
to the bottom right corner. For example, the 3-by-3 identity matrix has 1s down its main
diagonal:
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
A square matrix like the above in which the entries outside the main diagonal are all zero
is called a diagonal matrix. The sum of the entries on the main diagonal of a matrix is
known as the trace of that matrix.
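A quick NumPy illustration (the values are arbitrary): np.diag extracts the main diagonal and np.trace sums it.

    import numpy as np

    A = np.array([[1, 7, 7],
                  [7, 1, 7],
                  [7, 7, 1]])
    print(np.diag(A))    # [1 1 1], the main diagonal
    print(np.trace(A))   # 3, the sum of the main-diagonal entries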
The main diagonal of a rectangular matrix is the diagonal which runs from the top left
corner and steps down and right, until the right edge is reached.
Modal matrix
In linear algebra, the modal matrix is used in the diagonalization process involving
eigenvalues and eigenvectors.
Consider the linear state equation X′ = AX + BU, where X is n×1, A is n×n, and B is n×1. X typically represents the state vector, and U the
system input.
Specifically the modal matrix M is the n×n matrix formed with the eigenvectors of A as
columns in M. It is utilized in the similarity transformation
D = M^−1 A M,
where D is an n×n diagonal matrix with the eigenvalues of A on the main diagonal of D
and zeros elsewhere. (Note that the eigenvalues should appear left→right, top→bottom in the
same order as their eigenvectors are arranged left→right in M.)
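A hedged NumPy sketch (the matrix A is made up): the columns returned by np.linalg.eig form a modal matrix M, and M^−1 A M is diagonal with the matching eigenvalues on its diagonal.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    eigenvalues, M = np.linalg.eig(A)       # columns of M are eigenvectors of A
    D = np.linalg.inv(M) @ A @ M
    print(np.allclose(D, np.diag(eigenvalues)))   # True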
Nilpotent matrix
In linear algebra, a nilpotent matrix is a square matrix N such that N^k = 0 for some positive integer k. The smallest such k is sometimes called the degree of N.
Examples
The matrix
M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
is nilpotent, since M² = 0. More generally, any triangular matrix with 0's along the main
diagonal is nilpotent. For example, the matrix
N = \begin{pmatrix} 0 & 2 & 1 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}
is nilpotent, with
N² = \begin{pmatrix} 0 & 0 & 6 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} and N³ = 0.
Though the examples above have a large number of zero entries, a typical nilpotent
matrix does not. For example, the matrices
\begin{pmatrix} 6 & -9 \\ 4 & -6 \end{pmatrix} and \begin{pmatrix} 2 & -4 \\ 1 & -2 \end{pmatrix}
both square to zero, though neither matrix has zero entries.
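A small NumPy check (reusing one of the matrices above): repeated multiplication finds the smallest k with N^k = 0, i.e. the degree of the nilpotent matrix.

    import numpy as np

    N = np.array([[6.0, -9.0],
                  [4.0, -6.0]])
    power, k = N.copy(), 1
    while not np.allclose(power, 0):
        power, k = power @ N, k + 1
    print(k)   # 2: N squares to zero even though no entry of N is zero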
Normal matrix
A complex square matrix A is a normal matrix if
A*A=AA*
where A* is the conjugate transpose of A. That is, a matrix is normal if it commutes with
its conjugate transpose.
Orthogonal matrix
In linear algebra, an orthogonal matrix is a square matrix with real entries whose
columns (or rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns
are unit vectors in addition to being orthogonal, some people use the term orthonormal
to describe such matrices.
As a linear transformation, an orthogonal matrix preserves the dot product of vectors, and
therefore acts as an isometry of Euclidean space, such as a rotation or reflection. In other
words, it is a unitary transformation.
Pentadiagonal matrix
In linear algebra, a pentadiagonal matrix is a matrix that is nearly diagonal; to be exact,
it is a matrix in which the only nonzero entries are on the main diagonal and on the first two
diagonals above and below it.
It follows that a pentadiagonal matrix has at most 5n − 6 nonzero entries, where n is the
size of the matrix. Hence, pentadiagonal matrices are sparse. This makes them useful in
numerical analysis.
Polynomial matrix
A polynomial matrix or matrix polynomial is a matrix whose elements are univariate
or multivariate polynomials.
A polynomial matrix over a ring R can equivalently be regarded as a polynomial whose coefficients are matrices; we can express this by saying that the rings Mn(R[X]) and (Mn(R))[X] are
isomorphic.
Rank
In linear algebra, the rank of a matrix A is the maximal number of linearly independent columns of A (the column rank), which equals the maximal number of linearly independent rows (the row rank).
Since the column rank and the row rank are always equal, they are simply called the rank
of A. More abstractly, it is the dimension of the image of A. For the proofs, see, e.g.,
Murase (1960)[1], Andrea & Wong (1960)[2], Williams & Cater (1968)[3], Mackiw
(1995)[4]. It is commonly denoted by either rk(A) or rank A.
The rank of an m × n matrix is at most min(m, n). A matrix that has rank as large as possible
is said to have full rank; otherwise, the matrix is rank deficient.
Applications
One useful application of calculating the rank of a matrix is the computation of the
number of solutions of a system of linear equations. The system is inconsistent if the rank
of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other
hand, ranks of these two matrices are equal, the system must have at least one solution.
The solution is unique if and only if the rank equals the number of variables. Otherwise
the general solution has k free parameters where k is the difference between the number
of variables and the rank. This theorem is due to Rouché and Capelli.
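A sketch of this Rouché-Capelli test with NumPy (the system values are invented): compare the rank of the coefficient matrix with the rank of the augmented matrix.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])
    b = np.array([[3.0],
                  [6.0]])
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_Ab > rank_A:
        print("inconsistent: no solution")
    elif rank_A == A.shape[1]:
        print("unique solution")
    else:
        print("infinitely many solutions,", A.shape[1] - rank_A, "free parameter(s)")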
In control theory, the rank of a matrix can be used to determine whether a linear system is
controllable, or observable.
Row vector
In linear algebra, a row vector or row matrix is a 1 × n matrix, that is, a matrix
consisting of a single row:[1]
x = [x_1  x_2  …  x_n].
Skew-Hermitian matrix
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or
antihermitian if its conjugate transpose is equal to its negative.[1] That is, the matrix A is
skew-Hermitian if it satisfies the relation
A^H = −A,
where A^H denotes the conjugate transpose of a matrix. In component form, this means that
\overline{a_{ji}} = −a_{ij}
for all i and j, where a_{i,j} is the i,j-th entry of A, and the overline denotes complex
conjugation.
Skew-symmetric matrix
In linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a
square matrix A whose transpose is also its negative; that is, it satisfies the equation:
A^T = −A,
or in component form, a_{ji} = −a_{ij} for all i and j.
Compare this with a symmetric matrix, whose transpose is the same as the matrix: A^T = A.
Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix A that is equal to its transpose: A = A^T.
The entries of a symmetric matrix are symmetric with respect to the main diagonal (top
left to bottom right). So if the entries are written as A = (a_{ij}), then a_{ij} = a_{ji} for all indices i and j.
Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly,
each diagonal element of a skew-symmetric matrix must be zero, since each is its own
negative.
In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real
inner product space. The corresponding object for a complex inner product space is a
Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose.
Therefore, in linear algebra over the complex numbers, it is generally assumed that a
symmetric matrix refers to one which has real-valued entries. Symmetric matrices
appear naturally in a variety of applications, and typical numerical linear algebra software
makes special accommodations for them.
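As one illustration of such accommodations (this snippet is mine; the matrix is arbitrary), NumPy's eigh routine is specialised to symmetric/Hermitian input and returns real eigenvalues with orthonormal eigenvectors:

    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                    # symmetric: S == S.T
    w, Q = np.linalg.eigh(S)                      # symmetric-aware eigensolver
    print(w)                                      # real eigenvalues
    print(np.allclose(Q @ np.diag(w) @ Q.T, S))   # True: S = Q diag(w) Q^T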
Transpose
In linear algebra, the transpose of a matrix A is another matrix A^T (also written A′, A^tr or
A^t) created by any one of the following equivalent actions:
reflect A over its main diagonal (which runs from top left to bottom right) to obtain A^T;
write the rows of A as the columns of A^T;
write the columns of A as the rows of A^T;
formally, the (i, j) element of A^T is the (j, i) element of A: [A^T]_{ij} = [A]_{ji}.
If A is an m × n matrix, then A^T is an n × m matrix. The transpose of a scalar is the same
scalar.
Triangular matrix
In the mathematical discipline of linear algebra, a triangular matrix is a special kind of
square matrix where the entries either below or above the main diagonal are zero.
Because matrix equations with triangular matrices are easier to solve, they are very
important in numerical analysis. The LU decomposition gives an algorithm to decompose
any invertible matrix A into a normed (unit) lower triangular matrix L and an upper triangular
matrix U.
A matrix in which all entries above the main diagonal are zero is called a lower triangular matrix or left triangular matrix, and analogously a matrix in which
all entries below the main diagonal are zero is called an upper triangular matrix or right triangular matrix.
The standard operations on triangular matrices conveniently preserve the triangular form:
the sum and product of two upper triangular matrices are again upper triangular. The
inverse of an upper triangular matrix is also upper triangular, and of course we can
multiply an upper triangular matrix by a constant and it will still be upper triangular. This
means that the upper triangular matrices form a subalgebra of the ring of square matrices
for any given size. The analogous result holds for lower triangular matrices. Note,
however, that the product of a lower triangular with an upper triangular matrix does not
preserve triangularity.
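A hedged sketch using SciPy (the example matrix is my own): scipy.linalg.lu factors A into triangular pieces as described above, although it uses partial pivoting, so a permutation matrix P also appears.

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    P, L, U = lu(A)                    # A = P @ L @ U
    print(np.allclose(P @ L @ U, A))   # True
    print(L)                           # unit lower triangular
    print(U)                           # upper triangular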
Tridiagonal matrix
In linear algebra, a tridiagonal matrix is a matrix that is "almost" a diagonal matrix. To
be exact: a tridiagonal matrix has nonzero elements only in the main diagonal, the first
diagonal below this, and the first diagonal above the main diagonal.
Unitary matrix
In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition
U^H U = U U^H = I_n,
where I_n is the identity matrix in n dimensions and U^H is the conjugate transpose (also
called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if
and only if it has an inverse which is equal to its conjugate transpose, U^−1 = U^H.
A unitary matrix in which all entries are real is an orthogonal matrix. Just as an
orthogonal matrix G preserves the (real) inner product of two real vectors, so also a unitary matrix U satisfies
⟨Ux, Uy⟩ = ⟨x, y⟩
for all complex vectors x and y, where ⟨·, ·⟩ stands now for the standard inner product on
C^n.
The following conditions on an n by n complex matrix U are equivalent:
1. U is unitary
2. U^H is unitary
3. the columns of U form an orthonormal basis of C^n with respect to this inner
product
4. the rows of U form an orthonormal basis of C^n with respect to this inner product
5. U is an isometry with respect to the norm from this inner product
6. U is a normal matrix with eigenvalues lying on the unit circle.
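A NumPy check of several of these conditions (the matrix U and the vectors are chosen for illustration):

    import numpy as np

    U = np.array([[1, 1j],
                  [1j, 1]]) / np.sqrt(2)
    print(np.allclose(U.conj().T @ U, np.eye(2)))              # U^H U = I
    x, y = np.array([1 + 2j, 3j]), np.array([2, 1 - 1j])
    print(np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))    # inner product preserved
    print(np.allclose(np.abs(np.linalg.eigvals(U)), 1))        # eigenvalues on the unit circle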
X–Y–Z matrix
An X–Y–Z matrix is a generalization of the concept of matrix to three dimensions.
Such matrices are helpful for example when considering grids in three dimensions, as in
computer simulations of three-dimensional problems.
Zero matrix
In mathematics, particularly linear algebra, a zero matrix is a matrix with all its entries
being zero. Some examples of zero matrices are
0_{1,1} = \begin{pmatrix} 0 \end{pmatrix}, 0_{2,2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, 0_{2,3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
The set of m×n matrices with entries in a ring K forms a ring K_{m,n}. The zero matrix
0_{K_{m,n}} in K_{m,n} is the matrix with all entries equal to 0_K, where 0_K is the additive
identity in K.
The zero matrix is the additive identity in K_{m,n}. That is, for all A in K_{m,n} it satisfies
A + 0_{K_{m,n}} = 0_{K_{m,n}} + A = A.
There is exactly one zero matrix of any given size m×n having entries in a given ring, so
when the context is clear one often refers to the zero matrix. In general the zero element
of a ring is unique and typically denoted as 0 without any subscript indicating the parent
ring. Hence the examples above represent zero matrices over any ring.
The zero matrix represents the linear transformation sending all vectors to the zero
vector.