Application of Eigenvalues and Eigenvectors


CHAPTER ONE

INTRODUCTION
1.1 Background

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically,
however, they arose in the study of quadratic forms and differential equations. In the 18th
century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the
importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the
eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how
their work could be used to classify the quadric surfaces, and generalized it to arbitrary
dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is
now called eigenvalue; his term survives in characteristic equation.

Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat
equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.
Charles-François Sturm developed Fourier's ideas further, and brought them to the attention of
Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric
matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now
called Hermitian matrices.

Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie
on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric
matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by
Laplace, by realizing that defective matrices can cause instability.

In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the
discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the
first eigenvalue of Laplace's equation on general domains towards the end of the 19th century,
while Poincaré studied Poisson's equation a few years later.

At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by
viewing the operators as infinite matrices. He was the first to use the German word eigen, which
means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been

1
following a related usage by Hermann von Helmholtz. For some time, the standard term in
English was "proper value", but the more distinctive term "eigenvalue" is the standard today.
With the growth in importance of using computers to carry out numerical procedures in solving
mathematical models of the world, an area known as scientific computing or computational
science has taken shape. Because of the fact that the mathematical models can not usually be
solved explicitly, and numerical method to obtain approximate solutions are needed.

1.2 Statement of the problem


Originally used to study the principal axes of the rotational motion of rigid bodies, eigenvalues
and eigenvectors now have a wide range of applications, for example in stability analysis,
vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. Applications
of eigenvalues and eigenvectors also occur throughout physics, mathematics, applied science,
and engineering.

In short, the project is designed to answer the following main questions related to the
application of eigenvalues and eigenvectors in physics:

 What do we mean by eigenvalues and eigenvectors?

 How can we apply eigenvalues and eigenvectors to physics problems?

1.3 Objectives

1.3.1 General objectives

The primary objective of this project is to introduce the applications of eigenvalues and
eigenvectors in physics.

1.3.2 Specific objectives


The specific objectives and topics covered are:
 Define terminology related to eigenvalue and eigenvector problems.
 Understand the meaning of eigenvalues and eigenvectors.
 Know how to apply eigenvalues and eigenvectors in physics applications.

CHAPTER TWO

PRELIMINARY CONCEPTS

2.1 Introduction
In physical science, an eigenvector of a transformation is a non-zero vector that points in a
direction which is stretched by the transformation, and the corresponding eigenvalue is the
factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. As stated
in the background (Section 1.1), in general an operator operating on an arbitrary state function
will change it to another state function. It can be shown that, associated with each operator
representing a physically observable property, there is a unique set of characteristic state
functions that do not change, except for multiplication by a constant, when operated upon by
the operator. These state functions are called the "eigenfunctions" of this operator. Application
of such an operator to each of its eigenfunctions yields a characteristic number, which is a real
number (no imaginary part), multiplying this eigenfunction. The characteristic number
corresponding to each eigenfunction of the operator is called the "eigenvalue" corresponding
to this eigenfunction.

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal
basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal
decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance
matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA)
in statistics. PCA studies linear relations among variables. PCA is performed on the covariance
matrix or the correlation matrix (in which each variable is scaled to have its sample
variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond
to principal components and the eigenvalues to the variance explained by the principal
components. Principal component analysis of the correlation matrix provides an orthogonal
basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the
principal components that are associated with most of the covariability among a number of
observed data.
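
To make the PCA description above concrete, here is a minimal NumPy sketch of principal
component analysis via the eigendecomposition of a sample covariance matrix. The synthetic
data and variable names are our own illustrative assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 observations of 3 variables (illustrative data)
X[:, 2] = 2.0 * X[:, 0] + 0.1 * X[:, 2]  # introduce a linear relation among the variables

C = np.cov(X, rowvar=False)              # 3x3 sample covariance matrix (symmetric PSD)
eigvals, eigvecs = np.linalg.eigh(C)     # eigh suits symmetric matrices; eigenvalues ascend

order = np.argsort(eigvals)[::-1]        # reorder so the largest variance comes first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()      # fraction of total variance per principal component
scores = (X - X.mean(axis=0)) @ eigvecs  # coordinates of the data in the eigenvector basis
print(explained)                         # the first component dominates, reflecting the relation
```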

Principal component analysis is used as a means of dimensionality reduction in the study of
large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues
of the correlation matrix determine the Q-methodologist's judgment of practical significance
(which differs from the statistical significance of hypothesis testing; cf. criteria for determining
the number of factors). More generally, principal component analysis can be used as a method
of factor analysis in structural equation modeling.

2.2 Definitions and examples of some important terminology


Definition 1: Every pair of distinct points $A$ and $B$ in $\mathbb{R}^n$ determines a directed
line segment with initial point at $A$ and terminal point at $B$. We call such a directed line
segment a vector and denote it by $\overrightarrow{AB}$.
Definition 2: The length of the line segment is the magnitude of the vector.

Definition 3: Although $\overrightarrow{AA}$ has zero length and, strictly speaking, no
direction, it is convenient to view it as a vector. It is called a zero or null vector and is often
denoted by $O$.


Remark: Two vectors $\overrightarrow{AB}$ and $\overrightarrow{CD}$ are considered equal (or
equivalent), written $\overrightarrow{AB} = \overrightarrow{CD}$, if they have the same
magnitude and direction.
Definition 4: Let $V$ be a vector space over $K$. Elements $v_1, v_2, \ldots, v_n$ of $V$ are
said to be linearly independent if and only if the following condition is satisfied: whenever
$a_1, a_2, \ldots, a_n$ are in $K$ such that $a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0$, then
$a_i = 0$ for all $i = 1, 2, \ldots, n$. Otherwise, the vectors are called linearly dependent.
Example 2.1: Consider $v_1 = (1, -1, 1)$, $v_2 = (2, 0, -1)$ and $v_3 = (2, -2, 2)$.
i) $a_1 v_1 + a_2 v_2 = a_1(1, -1, 1) + a_2(2, 0, -1) = (a_1 + 2a_2,\ -a_1,\ a_1 - a_2)$
$a_1 v_1 + a_2 v_2 = 0 \Rightarrow a_1 + 2a_2 = 0$, $-a_1 = 0$ and $a_1 - a_2 = 0$
$\Rightarrow a_1 = 0$ and $a_2 = 0$.
Hence $v_1$ and $v_2$ are linearly independent.
ii) $a_1 v_1 + a_2 v_3 = a_1(1, -1, 1) + a_2(2, -2, 2) = (a_1 + 2a_2,\ -a_1 - 2a_2,\ a_1 + 2a_2)$
$a_1 v_1 + a_2 v_3 = 0 \Rightarrow a_1 + 2a_2 = 0$, $-a_1 - 2a_2 = 0$ and $a_1 + 2a_2 = 0$
$\Rightarrow a_1 = -2a_2$.
Taking $a_1 = 2$ and $a_2 = -1$, we get $2(1, -1, 1) + (-1)(2, -2, 2) = 0$.
As the constants are not all equal to zero, $v_1$ and $v_3$ are linearly dependent.
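
As a numerical cross-check of Example 2.1, the rank test below uses the fact that vectors are
linearly independent exactly when the matrix having them as rows has full rank. This is a
minimal NumPy sketch; the helper name linearly_independent is our own illustration, not
something from the text.

```python
import numpy as np

def linearly_independent(*vectors):
    """Vectors are linearly independent iff the matrix they form has full row rank."""
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) == len(vectors)

v1, v2, v3 = (1, -1, 1), (2, 0, -1), (2, -2, 2)
print(linearly_independent(v1, v2))  # True: v1 and v2 are independent
print(linearly_independent(v1, v3))  # False: v3 = 2*v1, so they are dependent
```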

Definition 5: A rectangular arrangement of $mn$ numbers (real or complex) into $m$ horizontal
rows and $n$ vertical columns enclosed by a pair of brackets [ ], such as

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

is called an $m \times n$ (read "m by n") matrix or a matrix of order $m \times n$.

Parentheses ( ) are also commonly used to enclose numbers constituting matrices.

Definition 6: Two matrices $A$ and $B$ are said to be equal, written $A = B$, if they are of the
same order and if all corresponding entries are equal. For example,

$$\begin{bmatrix} 5 & 1 & 0 \\ 2 & 3 & 4 \end{bmatrix} = \begin{bmatrix} 2+3 & 1 & 0 \\ 2 & 3 & 4 \end{bmatrix}, \quad \text{but} \quad \begin{bmatrix} 9 & 2 \end{bmatrix} \neq \begin{bmatrix} 9 \\ 2 \end{bmatrix}.$$

Definition 7: A matrix that has exactly one row is called a row matrix. For example, the matrix
$A = \begin{bmatrix} 3 & 2 & -1 & 4 \end{bmatrix}$ is a row matrix of order $1 \times 4$.

Definition 8: A matrix consisting of a single column is called a column matrix. For example,
the matrix $B = \begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix}$ is a $3 \times 1$ column matrix.

Definition 9: An $m \times n$ matrix is said to be a square matrix of order $n$ if $m = n$, that
is, if it has the same number of columns as rows. For example,

$$\begin{bmatrix} -3 & 4 & 6 \\ 2 & 1 & 3 \\ 5 & 2 & -1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 2 & -1 \\ 5 & 6 \end{bmatrix}$$

are square matrices of order 3 and 2, respectively.

In a square matrix $A = (a_{ij})$ of order $n$, the entries $a_{11}, a_{22}, \ldots, a_{nn}$,
which lie on the diagonal extending from the upper left corner to the lower right corner, are
called the main diagonal entries, or more simply the main diagonal. Thus, in the matrix

$$C = \begin{bmatrix} 3 & 2 & 4 \\ 1 & 6 & 0 \\ 5 & 1 & 8 \end{bmatrix}$$

the entries $c_{11} = 3$, $c_{22} = 6$ and $c_{33} = 8$ constitute the main diagonal.

Definition 10: A square matrix is said to be an upper (lower) triangular matrix if all entries
below (above) the main diagonal are zeros. For example,

$$\begin{bmatrix} 3 & 4 & 8 \\ 0 & 1 & 2 \\ 0 & 0 & -3 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 7 & 0 & 0 & 0 \\ 1 & 3 & 0 & 0 \\ 6 & 1 & 2 & 0 \\ -2 & -4 & 8 & 6 \end{bmatrix}$$

are upper and lower triangular matrices, respectively.

Definition 11: A square matrix is said to be diagonal if each of the entries not falling on the
main diagonal is zero. Thus a square matrix $A = (a_{ij})$ is diagonal if $a_{ij} = 0$ for
$i \neq j$. For example,

$$\begin{bmatrix} 5 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 7 \end{bmatrix}$$

is a diagonal matrix.

Definition 12: A square matrix is said to be an identity matrix or unit matrix if all its main
diagonal entries are 1's and all other entries are 0's. In other words, a diagonal matrix whose
main diagonal elements are all equal to 1 is called an identity or unit matrix. An identity matrix
of order $n$ is denoted by $I_n$ or, more simply, by $I$. For example,

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{and} \quad I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

are the identity matrices of order 3 and 2, respectively.

Definition 13: A linear equation in the variables $x_1, x_2, \ldots, x_n$ over the real field
$\mathbb{R}$ is an equation that can be written in the form

$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b \qquad (2.1)$$

where $b$ and the coefficients $a_1, a_2, \ldots, a_n$ are given real numbers.

Definition 14: A system of linear equations (or a linear system) is a collection of one or more
linear equations involving the same variables, say $x_1, x_2, \ldots, x_n$.

Now consider a system of $m$ linear equations in $n$ unknowns $x_1, x_2, \ldots, x_n$:

$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\ \,\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m
\end{aligned} \qquad (2.2)$$

If $b_1 = b_2 = \cdots = b_m = 0$, then we say that the system is homogeneous. If $b_i \neq 0$
for some $i \in \{1, 2, 3, \ldots, m\}$, then the system is called non-homogeneous. In matrix
notation, the linear system (2.2) can be written as $AX = B$, where

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.$$

We call $A$ the coefficient matrix of the system (2.2).

Observe that the entries of the $k$-th column of $A$ are the coefficients of the variable $x_k$
in (2.2).

Definition 15: The $m \times (n+1)$ matrix whose first $n$ columns are the columns of $A$ (the
coefficient matrix) and whose last column is $B$ is called the augmented matrix of the system.
We denote it by $[A \mid B]$. The augmented matrix determines the system (2.2) completely
because it contains all the coefficients and the constant on the right side of each equation in
the system. For example, for the non-homogeneous linear system

$$\begin{aligned}
x_1 + 3x_2 - x_3 &= 2 \\
x_2 - 2x_3 &= 4 \\
-2x_1 - 3x_2 - 3x_3 &= 5
\end{aligned} \qquad (2.3)$$

the matrix $A = \begin{bmatrix} 1 & 3 & -1 \\ 0 & 1 & -2 \\ -2 & -3 & -3 \end{bmatrix}$ is the
coefficient matrix and $\begin{bmatrix} 1 & 3 & -1 & 2 \\ 0 & 1 & -2 & 4 \\ -2 & -3 & -3 & 5 \end{bmatrix}$
is the augmented matrix.
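
As an aside, system (2.3) can also be solved numerically. The following NumPy sketch is
illustrative and not part of the original project; it assumes the coefficient matrix is invertible,
which holds here since $\det(A) = 1$.

```python
import numpy as np

# Coefficient matrix and right-hand side of system (2.3)
A = np.array([[ 1,  3, -1],
              [ 0,  1, -2],
              [-2, -3, -3]], dtype=float)
B = np.array([2, 4, 5], dtype=float)

X = np.linalg.solve(A, B)     # unique solution, since det(A) != 0
print(X)
print(np.allclose(A @ X, B))  # verify the solution satisfies AX = B
```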

Definition 16: A solution of a linear system in $n$ unknowns $x_1, x_2, \ldots, x_n$ is an
$n$-tuple $(s_1, s_2, \ldots, s_n)$ of real numbers that makes each of the equations in the
system a true statement when $s_i$ is substituted for $x_i$, $i = 1, 2, \ldots, n$. The set of
all possible solutions is called the solution set of the linear system. We say that two linear
systems are equivalent if they have the same solution set.

2.3 Multiplication of Matrices
While the operations of matrix addition and scalar multiplication are fairly straightforward,
the product AB of matrices A and B is defined only under the condition that the number of
columns of A equals the number of rows of B. When this condition holds, we say that the
matrices are conformable for the product AB.
Because of the wide use of matrix multiplication in application problems, it is important that
we learn it well. Therefore, we will try to learn the process in a step-by-step manner,
beginning with the product of a row matrix and a column matrix.
Example 2.2: Given $A = \begin{bmatrix} 2 & 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$, find the product $AB$.
Solution: The product is a $1 \times 1$ matrix whose entry is obtained by multiplying the
corresponding entries and then forming the sum:
$$AB = \begin{bmatrix} 2 & 3 & 4 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = [\,2a + 3b + 4c\,].$$
Example 2.3: Given $A = \begin{bmatrix} 2 & 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 5 \\ 6 \\ 7 \end{bmatrix}$, find the product $AB$.

Solution: $AB = [\,(2)(5) + (3)(6) + (4)(7)\,] = [\,10 + 18 + 28\,] = [56]$.

Example 2.4: Find the product $AB$ if

$$A = \begin{bmatrix} 1 & -4 \\ 5 & 3 \\ 0 & 2 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} -2 & 4 & 1 & 6 \\ 2 & 7 & 3 & 8 \end{bmatrix}.$$

Solution: Since the number of columns of $A$ is equal to the number of rows of $B$, the
product $AB = C$ is defined. Since $A$ is $3 \times 2$ and $B$ is $2 \times 4$, the product
$AB$ will be $3 \times 4$:

$$AB = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \end{bmatrix}.$$

The entry $c_{11}$ is obtained by summing the products of each entry in row 1 of $A$ with the
corresponding entry in column 1 of $B$, that is, $c_{11} = (1)(-2) + (-4)(2) = -10$. Similarly,
for $c_{21}$ we use the entries in row 2 of $A$ and those in column 1 of $B$, that is,
$c_{21} = (5)(-2) + (3)(2) = -4$. Also,
$c_{12} = (1)(4) + (-4)(7) = -24$;  $c_{13} = (1)(1) + (-4)(3) = -11$;
$c_{14} = (1)(6) + (-4)(8) = -26$;  $c_{22} = (5)(4) + (3)(7) = 41$;
$c_{23} = (5)(1) + (3)(3) = 14$;  $c_{24} = (5)(6) + (3)(8) = 54$;
$c_{31} = (0)(-2) + (2)(2) = 4$;  $c_{32} = (0)(4) + (2)(7) = 14$;
$c_{33} = (0)(1) + (2)(3) = 6$;  $c_{34} = (0)(6) + (2)(8) = 16$.
Therefore,

$$AB = \begin{bmatrix} -10 & -24 & -11 & -26 \\ -4 & 41 & 14 & 54 \\ 4 & 14 & 6 & 16 \end{bmatrix}.$$
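
The hand computation in Example 2.4 can be verified in one line with NumPy; this sketch is
illustrative and not part of the original project.

```python
import numpy as np

A = np.array([[1, -4],
              [5,  3],
              [0,  2]])
B = np.array([[-2, 4, 1, 6],
              [ 2, 7, 3, 8]])

print(A @ B)  # the @ operator performs matrix multiplication
# [[-10 -24 -11 -26]
#  [ -4  41  14  54]
#  [  4  14   6  16]]
```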
2.4 Eigenvalues and eigenvectors of a matrix
Definition 17: Let $A$ be an $n \times n$ matrix over a field $K$. A scalar $\lambda$ in $K$
is said to be an eigenvalue of $A$ if and only if there is a non-zero vector $X$ in $K^n$ such
that
$$AX = \lambda X. \qquad (2.4)$$
If $\lambda$ is an eigenvalue of $A$, then any non-zero vector $X$ in $K^n$ satisfying (2.4)
is called an eigenvector of $A$ corresponding to $\lambda$. (In our case $K$ is the set of all
real numbers.)

2.4.1 How to determine the eigenvalues and corresponding eigenvectors of a matrix

$X$ is an eigenvector of $A$ with eigenvalue $\lambda$ $\iff AX = \lambda X \iff (A - \lambda I_n)X = 0$, (2.5)

or

$$\begin{bmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

$X = 0$ is the trivial solution of (2.5). Further solutions will exist iff $|A - \lambda I_n| = 0$.

Hence, solving the equation $|A - \lambda I_n| = 0$ gives the eigenvalue(s) of $A$. For each
eigenvalue $\lambda$, the corresponding eigenvectors are found by substituting $\lambda$ back
into the equation $(A - \lambda I_n)X = 0$.

Note: i) The polynomial $|A - \lambda I_n|$ is called the characteristic polynomial of $A$.
ii) The equation $|A - \lambda I_n| = 0$ is called the characteristic equation.

Example 2.5: Let $A = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}$. Find the eigenvalues and
the corresponding eigenvectors of $A$.

Solution:
$$|A - \lambda I_2| = 0 \iff \begin{vmatrix} 1-\lambda & 6 \\ 5 & 2-\lambda \end{vmatrix} = 0 \iff \lambda^2 - 3\lambda - 28 = 0 \iff \lambda = 7 \ \text{or}\ \lambda = -4.$$

The corresponding eigenvectors can now be found as follows:

For $\lambda = 7$: $(A - 7I_2)X = 0$
$$\iff \left( \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix} - 7 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \right) \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \iff \begin{bmatrix} -6 & 6 \\ 5 & -5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \iff y = x.$$

Hence, any vector of the type $\alpha \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, where $\alpha$ is
any non-zero real number, is an eigenvector corresponding to the eigenvalue 7.

For  = -4: (A + 4I2)X = 0



[[ ] [ ]] [ ] [ ]
1
5
6
2
+ 4
1
0
0
1
x
y
=
0
0


[5 6 ] [ y ] [ 0 ]
5 6 x
=
0
⇔ y =
−5
6
x

Hence, any vector of the type


β
[ ]
−5
6
1

, where  is any real number, is an eigenvector


corresponding to the eigenvalue –4.
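
To cross-check Example 2.5 numerically, the following NumPy sketch (illustrative; the variable
names are ours) computes the eigenvalues and verifies the eigenvector relation $Av = \lambda v$.

```python
import numpy as np

A = np.array([[1, 6],
              [5, 2]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # 7 and -4 (the order may vary)
# Each column of eigvecs is a unit-length eigenvector; the one for
# lambda = -4 is proportional to (6, -5), matching the hand computation.
for lam, v in zip(eigvals, eigvecs.T):
    print(lam, np.allclose(A @ v, lam * v))  # verify A v = lambda v
```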
Theorem 2.1: If $A$ is an $n \times n$ matrix, then the following statements are equivalent:
i) $\lambda$ is an eigenvalue of $A$.
ii) There is a non-zero vector $X \in K^n$ such that $AX = \lambda X$.
iii) The system of equations $(A - \lambda I)X = 0$ has non-trivial solutions.
iv) $\lambda$ is a solution of the characteristic equation $\det(A - \lambda I) = 0$ in $K$.

Example 2.6: Let $A = \begin{bmatrix} 3 & -2 & 0 \\ -2 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}$.
Find the eigenvalues and the corresponding eigenvectors of $A$.

Solution: The characteristic equation of $A$ is
$$\begin{vmatrix} 3-\lambda & -2 & 0 \\ -2 & 3-\lambda & 0 \\ 0 & 0 & 5-\lambda \end{vmatrix} = 0
\Rightarrow (3-\lambda)(3-\lambda)(5-\lambda) - 4(5-\lambda) = 0
\Rightarrow [(3-\lambda)^2 - 4](5-\lambda) = 0
\Rightarrow (\lambda^2 - 6\lambda + 5)(\lambda - 5) = 0
\Rightarrow (\lambda - 1)(\lambda - 5)^2 = 0.$$
So the eigenvalues of $A$ are $\lambda = 1$ and $\lambda = 5$.
To find the corresponding eigenvectors, we substitute the values of $\lambda$ in the equation
$(A - \lambda I)X = 0$, that is,
$$\begin{bmatrix} 3-\lambda & -2 & 0 \\ -2 & 3-\lambda & 0 \\ 0 & 0 & 5-\lambda \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. \qquad (*)$$

For $\lambda = 1$, (*) becomes
$$\begin{bmatrix} 2 & -2 & 0 \\ -2 & 2 & 0 \\ 0 & 0 & 4 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow x = y,\ z = 0.$$

Thus, the eigenvectors corresponding to the eigenvalue 1 are vectors of the form
$$X = \begin{bmatrix} x \\ x \\ 0 \end{bmatrix} = x \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$
for any non-zero $x$ in the set of real numbers.

For $\lambda = 5$, (*) becomes
$$\begin{bmatrix} -2 & -2 & 0 \\ -2 & -2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow x = -y.$$
Thus, the eigenvectors of $A$ corresponding to the eigenvalue 5 are vectors of the form
$$X = \begin{bmatrix} x \\ -x \\ z \end{bmatrix} = \begin{bmatrix} x \\ -x \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ z \end{bmatrix} = x \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + z \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
for real numbers $x$ and $z$, not both zero.
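
A numerical cross-check of Example 2.6 (an illustrative NumPy sketch, not part of the original
text; since $A$ is symmetric, eigh is a natural choice):

```python
import numpy as np

A = np.array([[ 3, -2, 0],
              [-2,  3, 0],
              [ 0,  0, 5]], dtype=float)

# eigh is appropriate because A is symmetric; eigenvalues are returned in ascending order
eigvals, eigvecs = np.linalg.eigh(A)
print(eigvals)  # [1. 5. 5.] -- lambda = 1 once and lambda = 5 twice
print(eigvecs)  # columns span the same eigenspaces found by hand:
                # (1, 1, 0)/sqrt(2) for lambda = 1, and the plane spanned
                # by (1, -1, 0) and (0, 0, 1) for lambda = 5
```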

CHAPTER THREE

CONCLUSION
The study was prepared to investigate the theoretical background of eigenvalues and
eigenvectors in physics and their application to physical science problems.

As mentioned, eigenvalues and eigenvectors were originally used to study the principal axes of
the rotational motion of rigid bodies, and they now have a wide range of applications, for
example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and
matrix diagonalization. Applications of eigenvalues and eigenvectors also occur throughout
physics, mathematics, applied science, and engineering. We have applied this method of finding
eigenvalues and eigenvectors to a total of two problems to show the efficiency and
applicability of eigenvalue problems.

Generally, we have shown that eigenvalue problems provide a practical method, easily adapted
to a computer, for solving such problems with a modest amount of problem preparation.

