Application of Eigenvalues and Eigenvectors
CHAPTER ONE
INTRODUCTION
1.1 Background
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically,
however, they arose in the study of quadratic forms and differential equations. In the 18th
century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the
importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the
eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how
their work could be used to classify the quadric surfaces, and generalized it to arbitrary
dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is
now called eigenvalue; his term survives in characteristic equation.
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat
equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.
Charles-François Sturm developed Fourier's ideas further, and brought them to the attention of
Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric
matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now
called Hermitian matrices.
Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie
on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric
matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by
Laplace, by realizing that defective matrices can cause instability.
In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the
discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the
first eigenvalue of Laplace's equation on general domains towards the end of the 19th century,
while Poincaré studied Poisson's equation a few years later.
At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by
viewing the operators as infinite matrices. He was the first to use the German word eigen, which
means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been
following a related usage by Hermann von Helmholtz. For some time, the standard term in
English was "proper value", but the more distinctive term "eigenvalue" is the standard today.
With the growth in importance of using computers to carry out numerical procedures for solving
mathematical models of the world, an area known as scientific computing or computational
science has taken shape. Because such mathematical models usually cannot be solved explicitly,
numerical methods are needed to obtain approximate solutions.
In short, the project is designed to answer the main questions related to the application of
eigenvalues and eigenvectors in Physics.
1.3 Objectives
The primary objective of this project is to introduce the application of eigenvalues and
eigenvectors in Physics.
CHAPTER TWO
PRELIMINARY CONCEPTS
2.1 Introduction
In physical science, an eigenvector corresponds to a real nonzero eigenvalue and points in a
direction that is stretched by the transformation, whereas the eigenvalue is the factor by
which it is stretched. If the eigenvalue is negative, the direction of the transformation is
reversed. As stated in the background (Section 1.1), an operator acting on an arbitrary state
function will in general change it into another state function. It can be shown that, associated
with each operator representing a physically observable property, there is a unique set of
characteristic state functions that do not change when operated upon by the operator. These
state functions are called the ''eigenfunctions'' of the operator. Applying such an operator
to one of its eigenfunctions yields a characteristic number, which is a real number (no
imaginary part), multiplying that eigenfunction. The characteristic number corresponding to
each eigenfunction of the operator is called the ''eigenvalue'' corresponding to that
eigenfunction.
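The claim that such operators have real eigenvalues and characteristic directions can be checked numerically. The following sketch (a hypothetical illustration using Python with NumPy, which is not part of the original project) verifies that each eigenvector of a real symmetric matrix satisfies AX = λX with a real λ:

```python
import numpy as np

# A real symmetric matrix standing in for an operator representing a
# physically observable property; its eigenvalues are guaranteed to be real.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is NumPy's eigensolver for symmetric/Hermitian matrices.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Each column v of `eigenvectors` satisfies A v = lambda v: the operator
# merely rescales its eigenfunctions.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)
```

Here the eigenvalues come out as 1 and 3, both real, in agreement with the claim that these characteristic numbers have no imaginary part.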
In Q-methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's
judgment of practical significance (which differs from the statistical significance of
hypothesis testing; cf. criteria for determining the number of factors). More generally,
principal component analysis can be used as a method of factor analysis in structural
equation modeling.
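As a hypothetical sketch of this idea (using Python with NumPy, not part of the original project), the eigenvalues of a sample correlation matrix can be computed directly; their relative sizes indicate how many components carry most of the variance:

```python
import numpy as np

# Hypothetical data: 100 observations of 3 variables, the first two strongly correlated.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
data = np.column_stack([x,
                        x + 0.1 * rng.normal(size=100),
                        rng.normal(size=100)])

R = np.corrcoef(data, rowvar=False)   # 3x3 sample correlation matrix
eigenvalues = np.linalg.eigvalsh(R)   # real eigenvalues of the symmetric matrix R

print(np.sort(eigenvalues)[::-1])
```

The eigenvalues always sum to the number of variables (the trace of the correlation matrix), so a leading eigenvalue close to 2 here signals that the two correlated variables are captured by a single component.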
2.2 Vectors and Matrices
Definition 1. A directed line segment from a point A to a point B is called a vector; we
denote it by ⃗AB.
Definition 2. The length of the line segment is the magnitude of the vector.
Definition 3. Although ⃗AA has zero length and, strictly speaking, no direction, it is convenient
to regard it as a vector; it is called the zero vector.
Definition 5: A rectangular arrangement of mn numbers (real or complex) into m horizontal
rows and n vertical columns, enclosed by a pair of brackets [ ], such as

[ a11  a12  ...  a1n ]
[ a21  a22  ...  a2n ]
[  .    .         .  ]
[ am1  am2  ...  amn ]

is called an m×n matrix.
Definition 6. Two matrices A and B are said to be equal, written A = B, if they are of the same
order and if all corresponding entries are equal. For example,

[ 5  1  0 ]   [ 2+3  1   0  ]
[ 2  3  4 ] = [  2   3  2×2 ]

but

[ 9  2 ] ≠ [ 9 ]
           [ 2 ]
Definition 7. A matrix that has exactly one row is called a row matrix. For example, the matrix
A = [ 3  2  −1  4 ] is a row matrix of order 1×4.
Definition 8. A matrix consisting of a single column is called a column matrix. For example,
the matrix

    [ 3 ]
B = [ 1 ]
    [ 4 ]

is a 3×1 column matrix.
Definition 9. An m× n matrix is said to be a square matrix of order n if m = n. That is, if it
has the same number of columns as rows.
For example,

[ −3  4   6 ]
[  2  1   3 ]      [ 2  −1 ]
[  5  2  −1 ]  and [ 5   6 ]

are square matrices of order 3 and 2, respectively.
In a square matrix A = (aij) of order n, the entries a11, a22, ..., ann, which lie on the
diagonal extending from the upper left corner to the lower right corner, are called the main
diagonal entries, or more simply the main diagonal. Thus, in the matrix

    [ 3  2  4 ]
C = [ 1  6  0 ]
    [ 5  1  8 ]

the entries c11 = 3, c22 = 6 and c33 = 8 constitute the main diagonal.
Definition 10. A square matrix is said to be an upper (lower) triangular matrix if all entries
below (above) the main diagonal are zeros. For example,

[ 3  4   8 ]       [  7   0  0  0 ]
[ 0  1   2 ]       [  1   3  0  0 ]
[ 0  0  −3 ]  and  [  6   1  2  0 ]
                   [ −2  −4  8  6 ]

are upper and lower triangular matrices, respectively.
Definition 11: A square matrix is said to be diagonal if each of the entries not falling on the
main diagonal is zero. For example,

[ 5  0  0 ]
[ 0  0  0 ]
[ 0  0  7 ]

is a diagonal matrix.
Definition 12. A square matrix is said to be an identity matrix or unit matrix if all its main
diagonal entries are 1's and all other entries are 0's. In other words, a diagonal matrix all of
whose main diagonal elements are equal to 1 is called an identity or unit matrix. An identity
matrix of order n is denoted by In or, more simply, by I. For example,

     [ 1  0  0 ]
I3 = [ 0  1  0 ]      and      I2 = [ 1  0 ]
     [ 0  0  1 ]                    [ 0  1 ]

are the identity matrices of order 3 and order 2, respectively.
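The special matrices of Definitions 10-12 can be produced directly in software; the following is a small illustration (assuming Python with NumPy, which the text itself does not use):

```python
import numpy as np

I3 = np.eye(3, dtype=int)        # the 3x3 identity matrix of Definition 12
D = np.diag([5, 0, 7])           # the diagonal matrix from Definition 11's example
U = np.triu(np.array([[3, 4, 8],
                      [9, 1, 2],
                      [9, 9, -3]]))  # np.triu zeros every entry below the main diagonal

# U is now the upper triangular example of Definition 10.
print(I3)
print(D)
print(U)
```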
Definition 13: A linear equation in the variables x1, x2, ..., xn is an equation of the form

a1x1 + a2x2 + ... + anxn = b (2.1)

where a1, ..., an and b are real numbers.
Definition 14: A system of linear equations (or a linear system) is a collection of one or more
linear equations in the same variables, say

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
  .
  .
am1x1 + am2x2 + ... + amnxn = bm (2.2)

If b1 = b2 = ... = bm = 0 then we say that the system is homogeneous. If bi ≠ 0 for some
i ∈ {1, 2, ..., m} then the system is called non-homogeneous. In matrix notation, the linear
system (2.2) can be written as AX = B, where

    [ a11  a12  ...  a1n ]        [ x1 ]            [ b1 ]
A = [ a21  a22  ...  a2n ] ,  X = [ x2 ]   and  B = [ b2 ]
    [  .    .         .  ]        [  . ]            [  . ]
    [ am1  am2  ...  amn ]        [ xn ]            [ bm ]
We call A the coefficient matrix of the system (2.2).
Observe that entries of the k-th column of A are the coefficients of the variable x k in (2.2).
Definition 15. The m×(n+1) matrix whose first n columns are the columns of A (the
coefficient matrix) and whose last column is B is called the augmented matrix of the system.
We denote it by [A B]. The augmented matrix determines the system (2.2) completely because it
contains all the coefficients and the constant on the right side of each equation in the system.
For example, for the non-homogeneous linear system
4x1 + 3x2 − x3 = 2
x2 − 2x3 = 4
−2x1 − 3x2 − 3x3 = 5 (2.3)

the matrix

    [  4   3  −1 ]
A = [  0   1  −2 ]
    [ −2  −3  −3 ]

is the coefficient matrix and

[  4   3  −1  2 ]
[  0   1  −2  4 ]
[ −2  −3  −3  5 ]

is the augmented matrix.
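A system such as (2.3) can also be solved numerically. The sketch below (a hypothetical illustration in Python with NumPy, using the coefficient matrix A as printed above) builds the augmented matrix and solves AX = B:

```python
import numpy as np

# Coefficient matrix and right-hand side of the non-homogeneous system (2.3).
A = np.array([[ 4.0,  3.0, -1.0],
              [ 0.0,  1.0, -2.0],
              [-2.0, -3.0, -3.0]])
B = np.array([2.0, 4.0, 5.0])

# The augmented matrix [A B]: the coefficient matrix with B appended as a last column.
augmented = np.column_stack([A, B])

# det(A) != 0 here, so the system has a unique solution.
X = np.linalg.solve(A, B)
print(X)
```

Substituting the computed X back into each equation reproduces the right-hand side, confirming it is a solution of the system.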
Definition 16. A solution of the linear system (2.2) is an n-tuple (s1, s2, ..., sn) of real
numbers that makes each of the equations in the system a true statement when si is substituted
for xi, i = 1, 2, ..., n. The set of all possible solutions is called the solution set of the
linear system. We say that two linear systems are equivalent if they have the same solution set.
2.3 Multiplication of Matrices
While the operations of matrix addition and scalar multiplication are fairly straightforward,
the product AB of matrices A and B is defined only under the condition that the number of
columns of A equals the number of rows of B. When this condition holds, we say that the
matrices are conformable for the product AB.
Because of the wide use of matrix multiplication in application problems, it is important that
we learn it well. Therefore, we will learn the process in a step-by-step manner, beginning with
the product of a row matrix and a column matrix.
Example 2.2: Given the row matrix A = [ 2  3  4 ] and the column matrix

    [ a ]
B = [ b ]
    [ c ]

find the product AB.
Solution: The product is a 1×1 matrix whose entry is obtained by multiplying the
corresponding entries and then forming the sum:
AB = [ (2)(a) + (3)(b) + (4)(c) ] = [ 2a + 3b + 4c ]
Example 2.3: Given A = [ 2  3  4 ] and

    [ 5 ]
B = [ 6 ]
    [ 7 ]

find the product AB.
Solution: AB = [ (2)(5) + (3)(6) + (4)(7) ] = [ 10 + 18 + 28 ] = [ 56 ]
Example 2.4: Given

    [ 1  −4 ]
A = [ 5   3 ]      and      B = [ −2  4  1  6 ]
    [ 0   2 ]                   [  2  7  3  8 ]

find the product

     [ c11  c12  c13  c14 ]
AB = [ c21  c22  c23  c24 ]
     [ c31  c32  c33  c34 ]

Solution: The entry c11 is obtained by summing the products of each entry in row 1
of A with the corresponding entry in column 1 of B, that is,
c11 = (1)(−2) + (−4)(2) = −10. Similarly, for c21 we use the entries in
row 2 of A and those in column 1 of B, that is, c21 = (5)(−2) + (3)(2) = −4.
Also, c12 = (1)(4) + (−4)(7) = −24; c13 = (1)(1) + (−4)(3) = −11;
c14 = (1)(6) + (−4)(8) = −26; c22 = (5)(4) + (3)(7) = 41;
c23 = (5)(1) + (3)(3) = 14; c24 = (5)(6) + (3)(8) = 54;
c31 = (0)(−2) + (2)(2) = 4; c32 = (0)(4) + (2)(7) = 14;
c33 = (0)(1) + (2)(3) = 6; c34 = (0)(6) + (2)(8) = 16.

Therefore,

     [ −10  −24  −11  −26 ]
AB = [  −4   41   14   54 ]
     [   4   14    6   16 ]
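The entry-by-entry computation above can be cross-checked in a few lines (a hypothetical check in Python with NumPy; the `@` operator performs matrix multiplication):

```python
import numpy as np

A = np.array([[1, -4],
              [5,  3],
              [0,  2]])            # 3x2
B = np.array([[-2, 4, 1, 6],
              [ 2, 7, 3, 8]])      # 2x4: columns of A match rows of B

AB = A @ B                         # 3x4; entry (i, j) is row i of A times column j of B
print(AB)
```

The printed result agrees with the matrix AB computed entry by entry above.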
2.4 Eigenvalues and Eigenvectors of a Matrix
Definition 17: Let A be an n×n matrix over a field K. A scalar λ in K is said to be an
eigenvalue of A iff there is a non-zero vector X in K^n such that

AX = λX (2.4)

If λ is an eigenvalue of A, then any non-zero vector X in K^n satisfying (2.4) is called an
eigenvector of A corresponding to λ. (In our case K is the set of all real numbers.)
2.4.1 How to determine the eigenvalues and corresponding eigenvectors of a matrix?
X is an eigenvector with eigenvalue λ ⇔ AX = λX ⇔ (A − λIn)X = 0 (2.5)

or

[ a11−λ   a12    ...    a1n  ] [ x1 ]   [ 0 ]
[ a21    a22−λ   ...    a2n  ] [ x2 ]   [ 0 ]
[   .       .            .   ] [  . ] = [ . ]
[ an1     an2    ...   ann−λ ] [ xn ]   [ 0 ]

This homogeneous system has a non-zero solution X if and only if λ satisfies the characteristic
equation |A − λIn| = 0.
Example 2.5: Find the eigenvalues and the corresponding eigenvectors of the matrix

A = [ 1  6 ]
    [ 5  2 ]

Solution:

|A − λI2| = 0  ⇔  | [ 1  6 ] − λ [ 1  0 ] | = 0
                  | [ 5  2 ]     [ 0  1 ] |

⇔ | 1−λ    6  |
  |  5    2−λ | = 0   ⇔   λ² − 3λ − 28 = 0   ⇔   λ = 7 or λ = −4
The corresponding eigenvectors can now be found as follows.
For λ = 7: (A − 7I2)X = 0

⇔ [ [ 1  6 ] − 7 [ 1  0 ] ] [ x ] = [ 0 ]
  [ [ 5  2 ]     [ 0  1 ] ] [ y ]   [ 0 ]

⇔ [ −6   6 ] [ x ] = [ 0 ]   ⇔   y = x
  [  5  −5 ] [ y ]   [ 0 ]

Hence any vector of the form

α [ 1 ]
  [ 1 ]

where α is any non-zero real number, is an eigenvector corresponding to the eigenvalue 7.
For λ = −4: (A + 4I2)X = 0

⇔ [ 5  6 ] [ x ] = [ 0 ]   ⇔   y = −(5/6)x
  [ 5  6 ] [ y ]   [ 0 ]

Hence any non-zero multiple of

[  6 ]
[ −5 ]

is an eigenvector corresponding to the eigenvalue −4.
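The hand computation of Example 2.5 can be confirmed numerically (a hypothetical check using Python with NumPy):

```python
import numpy as np

A = np.array([[1.0, 6.0],
              [5.0, 2.0]])

# np.linalg.eig returns the eigenvalues and one eigenvector per column.
eigenvalues, eigenvectors = np.linalg.eig(A)

# The characteristic polynomial lambda^2 - 3*lambda - 28 has roots 7 and -4.
assert np.allclose(sorted(eigenvalues), [-4.0, 7.0])

# The eigenvector for lambda = 7 is a multiple of (1, 1): its entries are equal.
v7 = eigenvectors[:, np.argmax(eigenvalues)]
assert np.isclose(v7[0], v7[1])
print(eigenvalues)
```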
Example 2.6: Let

    [  3  −2  0 ]
A = [ −2   3  0 ]
    [  0   0  5 ]

Find the eigenvalues and the corresponding eigenvectors of A.

Solution: The characteristic equation of A is

| 3−λ   −2     0  |
| −2    3−λ    0  | = 0
|  0     0    5−λ |

(3 − λ)(3 − λ)(5 − λ) − 4(5 − λ) = 0
[(3 − λ)² − 4](5 − λ) = 0
(λ² − 6λ + 5)(λ − 5) = 0
(λ − 1)(λ − 5)² = 0

So the eigenvalues of A are λ = 1 and λ = 5 (a double root).
To find the corresponding eigenvectors, we substitute the values of λ in the equation
(A − λI)X = 0, that is,

[ 3−λ   −2     0  ] [ x ]   [ 0 ]
[ −2    3−λ    0  ] [ y ] = [ 0 ]    (*)
[  0     0    5−λ ] [ z ]   [ 0 ]
For λ = 1, (*) becomes

[  2  −2  0 ] [ x ]   [ 0 ]
[ −2   2  0 ] [ y ] = [ 0 ]   ⇔   x = y, z = 0
[  0   0  4 ] [ z ]   [ 0 ]

so the eigenvectors corresponding to λ = 1 are the vectors

    [ x ]     [ 1 ]
X = [ x ] = x [ 1 ]
    [ 0 ]     [ 0 ]

for any non-zero real number x.
For λ = 5, (*) becomes

[ −2  −2  0 ] [ x ]   [ 0 ]
[ −2  −2  0 ] [ y ] = [ 0 ]   ⇔   x = −y
[  0   0  0 ] [ z ]   [ 0 ]

Thus, the eigenvectors of A corresponding to the eigenvalue 5 are the non-zero vectors of the
form

    [  x ]   [  x ]   [ 0 ]     [  1 ]     [ 0 ]
X = [ −x ] = [ −x ] + [ 0 ] = x [ −1 ] + z [ 0 ]
    [  z ]   [  0 ]   [ z ]     [  0 ]     [ 1 ]
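Example 2.6 can likewise be verified numerically (a hypothetical check in Python with NumPy; `eigh` is used because A is symmetric):

```python
import numpy as np

A = np.array([[ 3.0, -2.0, 0.0],
              [-2.0,  3.0, 0.0],
              [ 0.0,  0.0, 5.0]])

# eigh returns the eigenvalues of a symmetric matrix in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(A)
assert np.allclose(eigenvalues, [1.0, 5.0, 5.0])   # lambda = 1 and a double root 5

# The eigenvector for lambda = 1 is a multiple of (1, 1, 0).
v1 = eigenvectors[:, 0]
assert np.allclose(v1 / v1[0], [1.0, 1.0, 0.0])
print(eigenvalues)
```

The two independent eigenvectors returned for the repeated eigenvalue 5 span the same plane as x(1, −1, 0) + z(0, 0, 1) found above.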
CHAPTER THREE
CONCLUSION
The study was prepared to investigate the theoretical background of eigenvalues and
eigenvectors and their application to problems in the physical sciences.
As mentioned, eigenvalues and eigenvectors were originally used to study the principal axes of
the rotational motion of rigid bodies; today they have a wide range of applications, for example
in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix
diagonalization. Applications of eigenvalues and eigenvectors occur throughout physics,
mathematics, applied science, and engineering. We have applied the method of finding
eigenvalues and eigenvectors to a total of two problems to show the efficiency and
applicability of eigenproblems.
In general, we have shown that eigenproblems are a practical method, easily adaptable on a
computer, for solving such problems with a modest amount of problem preparation.