
The Properties of Determinants:

1. The determinant of the n × n identity matrix is 1.

det(I) = 1

2. The determinant changes sign when two rows are exchanged.


   
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = -\det\begin{pmatrix} c & d \\ a & b \end{pmatrix}

3. The determinant is a linear function of each row separately.



\det\begin{pmatrix} ta & tb \\ c & d \end{pmatrix} = t\,\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad\text{and}\quad \det\begin{pmatrix} a+b & c+d \\ e & f \end{pmatrix} = \det\begin{pmatrix} a & c \\ e & f \end{pmatrix} + \det\begin{pmatrix} b & d \\ e & f \end{pmatrix}

∗ The other rows have to remain the same in each step.

4. If two rows are equal, then det(A) = 0. [Proof: exchange the two equal rows; the determinant changes sign but the matrix is unchanged, so det(A) = −det(A) = 0.]

5. Subtracting a multiple of one row from another row leaves the det unchanged.

\det\begin{pmatrix} a+lc & b+ld \\ c & d \end{pmatrix} = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} + l\,\det\begin{pmatrix} c & d \\ c & d \end{pmatrix} = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix}

6. A matrix with a zero row has det = 0.

7. If A is triangular, then det(A) = product of the diagonal entries:

det(A) = a_{11} a_{22} · · · a_{nn}

8. For singular A, det(A) = 0; for invertible A, det(A) ≠ 0 (elimination on an invertible A produces a full set of nonzero pivots).

9. det(AB) = det(A) det(B) ⇒ det(A) det(A^{-1}) = det(I) = 1.

10. det(A) = det(A^T). (Write A = LU; then det(L) = 1 and det(U) = det(U^T).)

∗ For any triangular matrix A_t, det(A_t) = det(A_t^T), since the diagonal entries remain the same.
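These properties are easy to verify numerically. Below is a minimal sketch using NumPy; the 3 × 3 matrices A and B are arbitrary examples chosen for illustration, not taken from the notes.

```python
# Minimal numerical checks of determinant properties 1, 2, 5, 9, and 10.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [1.0, 0.0, 1.0]])

# 1. det(I) = 1
assert np.isclose(np.linalg.det(np.eye(3)), 1.0)

# 2. Exchanging two rows flips the sign of the determinant.
A_swapped = A[[1, 0, 2], :]
assert np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A))

# 5. Subtracting a multiple of one row from another leaves det unchanged.
A_elim = A.copy()
A_elim[1] -= 0.5 * A_elim[0]
assert np.isclose(np.linalg.det(A_elim), np.linalg.det(A))

# 9. det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# 10. det(A) = det(A^T)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
```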

Eigenvalues and Eigenvectors:


• An eigenvector x keeps its direction (it is only scaled) when transformed by the matrix A:

Ax = λx

where x is called the eigenvector and λ is called the eigenvalue.

• To determine eigenvalues and eigenvectors:

Av̄ = λv̄ ⇒ (A − λI)v̄ = 0

⇒ for a nonzero solution v̄, the nullspace of (A − λI) must contain more than the zero vector. Hence, (A − λI) is singular.

– |A − λI| = 0 → characteristic equation.


– Solving the characteristic equation yields the eigenvalues λi (which may or may not be distinct).
– For each eigenvalue λi, the eigenvectors are obtained from

(A − λi I)vi = 0
 
Example:

A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix} → \det\begin{pmatrix} 1-\lambda & 4 \\ 2 & 3-\lambda \end{pmatrix} = 0

⇒ (1 − λ)(3 − λ) − 8 = 0
⇒ 3 − 4λ + λ² − 8 = 0
⇒ λ² − 4λ − 5 = 0
⇒ (λ − 5)(λ + 1) = 0
λ = 5, −1

λ1 = 5 ⇒ \begin{pmatrix} -4 & 4 \\ 2 & -2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} ⇒ v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}

λ2 = −1 ⇒ \begin{pmatrix} 2 & 4 \\ 2 & 4 \end{pmatrix} v_2 = \begin{pmatrix} 0 \\ 0 \end{pmatrix} ⇒ v_2 = \begin{pmatrix} -2 \\ 1 \end{pmatrix}

The eigenvectors are v_1 = (1, 1)^T and v_2 = (−2, 1)^T.

• Av = λv ⇒ after transformation, v only gets scaled up or down (there is no change in direction).

• Now, A · Av = Aλv = λAv = λ²v ⇒ A²v = λ²v

• Also, A^{-1}v = λ^{-1}v if Av = λv.

• (A + cI)v = (λ + c)v → the eigenvalue gets shifted by c.

• det(A) = λ1λ2 · · · λn = product of the eigenvalues of A, and
trace(A) = λ1 + λ2 + · · · + λn = sum of the eigenvalues of A.

For A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}: trace(A) = 4 = λ1 + λ2 and det(A) = −5 = λ1λ2.

– These two facts can be used to check the calculated eigenvalues, as in the sketch below.
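As a sketch of that check, NumPy's eig routine confirms the eigenpairs, trace, and determinant of the worked example above.

```python
# Verify the eigenpairs of A = [[1, 4], [2, 3]] and the trace/determinant checks.
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are the eigenvectors

print(sorted(eigvals))                # [-1.0, 5.0]

# Each pair satisfies A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# trace(A) = sum of eigenvalues (= 4), det(A) = product of eigenvalues (= -5).
assert np.isclose(np.trace(A), eigvals.sum())
assert np.isclose(np.linalg.det(A), eigvals.prod())
```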

• Say, through elimination one gets U from A →

– the eigenvalues of U are simply its diagonal (pivot) entries. However, these are not the eigenvalues of A: elimination does not preserve eigenvalues.

• The eigenvalues/eigenvectors can be complex.

• The eigenvalues of a symmetric matrix S (S^T = S) are real numbers.

• The eigenvalues of a skew-symmetric matrix A (A^T = −A) are purely imaginary.

• The λ's of an orthogonal matrix (AA^T = I) are complex numbers with absolute value 1, i.e., |λ| = 1.

• The λ's of AB are not the products of the individual eigenvalues of A and B, because A and B do not, in general, share the same eigenvectors.

• Similarly, the eigenvalues of (A + B) are not the sums of the individual eigenvalues of A and B.

Diagonalization:
• Av_i = λ_i v_i → λ_i is the eigenvalue corresponding to the eigenvector v_i.

– If a matrix V is formed whose columns are the eigenvectors of A, then V is called the eigenvector matrix of A.
  
AV = A \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix} = \begin{pmatrix} | & | & & | \\ \lambda_1 v_1 & \lambda_2 v_2 & \cdots & \lambda_n v_n \\ | & | & & | \end{pmatrix}

= \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} = V\Lambda

where Λ is the diagonal matrix with the eigenvalues as its diagonal entries.

⇒ AV = V Λ ← all are matrices.


If V contains linearly independent eigen vectors vi ’s, then,

A = V ΛV −1
or, Λ = V −1 AV

Eigen vector matrix V can diagonalize matrix ‘A’.

• Thus, the eigenvector matrix V, containing the linearly independent eigenvectors v_i as columns, diagonalizes the matrix A. The diagonal matrix Λ contains the eigenvalues as entries.

Λ = V^{-1}AV

A is diagonalizable only if its eigenvectors are linearly independent.

• Now, since A = V ΛV^{-1},
⇒ A² = (V ΛV^{-1})(V ΛV^{-1}) = V Λ²V^{-1}
Thus, A^k = V Λ^k V^{-1}, with

Λ^k = \begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix}

• Thus, if λ is an eigenvalue of A, then λ^k is an eigenvalue of A^k, whereas the eigenvectors remain the same; a quick numerical check follows.
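A minimal numerical sketch of this fact, reusing the earlier example A = [[1, 4], [2, 3]] and an assumed power k = 5:

```python
# Check A^k = V Λ^k V^{-1}: same eigenvectors, eigenvalues raised to the k-th power.
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
eigvals, V = np.linalg.eig(A)

k = 5
Ak_direct = np.linalg.matrix_power(A, k)
Ak_diag = V @ np.diag(eigvals**k) @ np.linalg.inv(V)
assert np.allclose(Ak_direct, Ak_diag)

# Eigenvalues of A^k are lambda^k; the eigenvectors are unchanged.
for lam, v in zip(eigvals, V.T):
    assert np.allclose(Ak_direct @ v, (lam**k) * v)
```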

• If the eigenvectors are not linearly independent, A cannot be diagonalized.

• Now, if the eigenvalues of A are all distinct, i.e., different from each other, then the eigenvectors are linearly independent, and thus the eigenvector matrix is invertible. Hence, A is diagonalizable.

• A matrix with repeated eigenvalues may not have enough independent eigenvectors and hence may fail to be diagonalizable.

Example:

A = \begin{pmatrix} 3 & 2 \\ 1 & 4 \end{pmatrix} → λ1 + λ2 = 7, λ1λ2 = 10

Characteristic polynomial: (3 − λ)(4 − λ) − 2 = 0
⇒ 12 − 7λ + λ² − 2 = 0
⇒ λ² − 7λ + 10 = 0
⇒ (λ − 5)(λ − 2) = 0 ⇒ λ = 2, 5

λ1 = 2 ⇒ \begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix} v_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix} ⇒ v_1 = \begin{pmatrix} -2 \\ 1 \end{pmatrix}

λ2 = 5 ⇒ \begin{pmatrix} -2 & 2 \\ 1 & -1 \end{pmatrix} v_2 = \begin{pmatrix} 0 \\ 0 \end{pmatrix} ⇒ v_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}

V = \begin{pmatrix} -2 & 1 \\ 1 & 1 \end{pmatrix}

V^{-1} = \frac{1}{-3}\begin{pmatrix} 1 & -1 \\ -1 & -2 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} -1 & 1 \\ 1 & 2 \end{pmatrix}

Let's calculate →

V^{-1}AV = \frac{1}{3}\begin{pmatrix} -1 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 3 & 2 \\ 1 & 4 \end{pmatrix}\begin{pmatrix} -2 & 1 \\ 1 & 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} -1 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} -4 & 5 \\ 2 & 5 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 6 & 0 \\ 0 & 15 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 5 \end{pmatrix} = Λ
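The same computation can be reproduced numerically; a sketch with the eigenvectors above as the columns of V:

```python
# V^{-1} A V should reproduce Λ = diag(2, 5) for the worked example.
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
V = np.array([[-2.0, 1.0],    # v1 = (-2, 1) for lambda = 2
              [ 1.0, 1.0]])   # v2 = ( 1, 1) for lambda = 5

Lam = np.linalg.inv(V) @ A @ V
print(np.round(Lam, 10))      # [[2. 0.]
                              #  [0. 5.]]
```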

Example:

A = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} ⇒ (1 − λ)(−1 − λ) + 1 = 0
⇒ −1 + λ² + 1 = 0 ⇒ λ = 0, 0

⇒ Repeated eigenvalues → 0 and 0.

Av = 0 ⇒ \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} v = 0 ⇒ v = \begin{pmatrix} 1 \\ 1 \end{pmatrix} ← v is in the nullspace of A.

There is only one eigenvector, v = (1, 1)^T; there is no other independent one. Hence, the matrix cannot be diagonalized.

The same is true for \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
NOTE: There is no connection between invertibility and diagonalizability.

⇒ If one eigenvalue is zero, the matrix is non-invertible.

⇒ If the eigenvectors are not linearly independent (too few eigenvectors), then the matrix is not diagonalizable.

Proof: distinct eigenvalues give independent eigenvectors →

Say,
c1v1 + c2v2 + c3v3 + · · · + civi + · · · + cnvn = 0 (1)

Now, multiply by A →

c1Av1 + c2Av2 + c3Av3 + · · · + ciAvi + · · · + cnAvn = 0

⇒ c1λ1v1 + c2λ2v2 + c3λ3v3 + · · · + ciλivi + · · · + cnλnvn = 0 (2)

Eqn. (2) − Eqn. (1) × λn →

c1(λ1 − λn)v1 + c2(λ2 − λn)v2 + · · · + ci(λi − λn)vi + · · · + cn(λn − λn)vn = 0

where the last term cancels, since λn − λn = 0. Hence,

⇒ \sum_{i=1}^{n-1} c_i(\lambda_i - \lambda_n)v_i = 0 (3)

⇒ \sum_{i=1}^{n-1} c_i(\lambda_i - \lambda_n)Av_i = 0

⇒ \sum_{i=1}^{n-1} c_i(\lambda_i - \lambda_n)\lambda_i v_i = 0 (4)

Eqn. (4) − Eqn. (3) × λ_{n−1} →

\sum_{i=1}^{n-1} c_i(\lambda_i - \lambda_n)(\lambda_i - \lambda_{n-1})v_i = 0

If we continue till i = 2, we are left with

c1(λ1 − λn)(λ1 − λ_{n−1}) · · · (λ1 − λ3)(λ1 − λ2)v1 = 0

Since the λi's are distinct, only c1 = 0 satisfies the condition. Similarly, it can be shown that every ci = 0.
Thus, the eigenvectors are independent.

Multiplicity
• Multiplicity captures the repetition of the eigenvalues.

• Geometric Multiplicity: the number of independent eigenvectors; it equals the dimension of the nullspace of (A − λI).

• Algebraic Multiplicity: the number of repetitions of an eigenvalue, i.e., in the solutions of det(A − λI) = 0.

Example: A = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} ⇒ both eigenvalues are 0. Hence AM = 2, GM = 1, since it has only one eigenvector \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
Example: Suppose λ = 2, 2, 2 ⇒ then AM = 3, but GM can be 1, 2, or 3.

⋆ When GM < AM, there is a shortage of eigenvectors, and hence A is not diagonalizable.

Explanation (AM & GM): Suppose three of the n eigenvalues of A are identical. While substituting this repeated eigenvalue, say λi, to find the corresponding eigenvectors, (A − λiI)v̄i = 0, we are actually looking for the nullspace of (A − λiI). If (A − λiI) has three free columns (after elimination), then (A − λiI) has three independent vectors spanning its nullspace; thus there are three independent solutions of (A − λiI)v̄i = 0. Hence GM = 3, and A is diagonalizable.
 
Ex1: A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} ⇒ |A − λI| = 0 ⇒ (1 − λ)³ = 0
⇒ λ = 1, 1, 1
Thus AM = 3.

A′ = (A − λiI) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}

A′v = 0, where columns 2 and 3 of A′ are the pivot columns and column 1 is free:

N(A′) = c\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}

GM = dim(N(A′)) = 1

 
Ex2: B = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} ⇒ |B − λI| = 0 ⇒ λ = 1, 1, 1 ⇒ AM = 3

B′ = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} ⇒ B′v̄ = 0

v̄_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} and v̄_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}

GM = dim(N(B′)) = 2

 
Ex3: C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} ⇒ λ = 1, 1, 1 ⇒ AM = 3

C′ = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} ⇒ N(C′) = c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + c_3\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}

GM = dim(N(C′)) = 3
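The geometric multiplicity can be computed by rank–nullity, GM = n − rank(A − λI); a sketch for the three examples (the helper geometric_multiplicity is an illustrative name, not from the notes):

```python
# GM = dim N(A - lambda*I) = n - rank(A - lambda*I), for the repeated eigenvalue 1.
import numpy as np

def geometric_multiplicity(M, lam):
    n = M.shape[0]
    return n - np.linalg.matrix_rank(M - lam * np.eye(n))

A = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 1]], dtype=float)  # Ex1
B = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=float)  # Ex2
C = np.eye(3)                                                 # Ex3

print(geometric_multiplicity(A, 1.0))  # 1
print(geometric_multiplicity(B, 1.0))  # 2
print(geometric_multiplicity(C, 1.0))  # 3
```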

Solving an eigenvalue problem

• Ax̄ = λx̄ ⇒ (A − λI)x̄ = 0

• Find the eigenvalues from the characteristic polynomial, |A − λI| = 0.

• For each eigenvalue, find the eigenvectors from (A − λiI)x̄i = 0.

• The solution of the above equation yields the nullspace of (A − λiI).

• The set of all vectors x̄i that satisfy (A − λiI)x̄i = 0 forms a vector space E(λi, A), called the eigenspace associated with λi.

If λi has two independent eigenvectors x̄i1 and x̄i2, then for x̄ = c1x̄i1 + c2x̄i2:

Ax̄ = c1Ax̄i1 + c2Ax̄i2 = λi(c1x̄i1 + c2x̄i2) = λix̄

so every combination of them is again an eigenvector for λi.

⋆ Thus the eigenspace of A corresponding to the eigenvalue λ is simply the solution space of (A − λI)x̄ = 0, i.e., the nullspace of (A − λI).

Example: Find the bases of the eigenspaces of

A = \begin{pmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}

Solution: Characteristic polynomial:

\det\begin{pmatrix} -\lambda & 0 & -2 \\ 1 & 2-\lambda & 1 \\ 1 & 0 & 3-\lambda \end{pmatrix} = 0 ⇒ −λ(2 − λ)(3 − λ) − 2{−(2 − λ)} = 0
⇒ (2 − λ)(2 − λ(3 − λ)) = 0
⇒ (2 − λ)(2 − 3λ + λ²) = 0
⇒ (2 − λ)(λ − 2)(λ − 1) = 0
⇒ λ = 1, 2, 2

λ1 = 1: (A − I)x̄1 = 0 ⇒ \begin{pmatrix} -1 & 0 & -2 \\ 1 & 1 & 1 \\ 1 & 0 & 2 \end{pmatrix} x̄_1 = 0

x̄1 lies in the nullspace of (A − I). Elimination gives

\begin{pmatrix} -1 & 0 & -2 \\ 1 & 1 & 1 \\ 1 & 0 & 2 \end{pmatrix} → \begin{pmatrix} -1 & 0 & -2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}

with one free column, so

x̄_1 = \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}, \quad N(A - I) = c\begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}, \quad \dim(N(A - I)) = 1

The eigenspace of A corresponding to λ = 1 is one-dimensional: E(1, A) = c(−2, 1, 1)^T.

λ2,3 = 2 ⇒ (A − 2I)x̄2 = 0

(A − 2I) = \begin{pmatrix} -2 & 0 & -2 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix} → \begin{pmatrix} -2 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}

with two free columns (x2 and x3), so

N(A − 2I) = c_1\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} ⇒ \dim(N(A - 2I)) = 2

The eigenspace of A corresponding to λ = 2 is a two-dimensional subspace:

E(2, A) = c_1\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + c_2\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}
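A sketch of the same eigenspace computation using SciPy's null_space, which returns an orthonormal basis, so the basis vectors may differ from the hand-computed ones by scaling or rotation within the same subspace:

```python
# Bases of the eigenspaces E(1, A) and E(2, A) as nullspaces of (A - lambda*I).
import numpy as np
from scipy.linalg import null_space

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

E1 = null_space(A - 1.0 * np.eye(3))   # one basis vector  -> E(1, A) is 1-D
E2 = null_space(A - 2.0 * np.eye(3))   # two basis vectors -> E(2, A) is 2-D
print(E1.shape, E2.shape)              # (3, 1) (3, 2)
```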

Similar Matrices
A matrix B is said to be similar to A if there exists an invertible matrix P such that

B = P^{-1}AP
⇒ A = PBP^{-1}, and with Q = P^{-1}, A = Q^{-1}BQ

Then A is also similar to B. Similarity implies →

• they have the same determinant.

• if A is invertible, then so is B.

• they have the same rank and nullity.

• they have the same trace, characteristic polynomial, and eigenvalues, and their eigenspaces for the same eigenvalue have the same dimension.

⋆ A matrix is diagonalizable if it is similar to a diagonal matrix,

B = P^{-1}AP, where B is diagonal.
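A quick numerical illustration of these shared invariants, with an arbitrarily chosen invertible P:

```python
# Similar matrices B = P^{-1} A P share determinant, trace, and eigenvalues.
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # any invertible matrix works here
B = np.linalg.inv(P) @ A @ P      # B is similar to A

assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.isclose(np.trace(B), np.trace(A))
assert np.allclose(np.sort(np.linalg.eigvals(B)),
                   np.sort(np.linalg.eigvals(A)))
```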

Symmetric matrix: S = S^T
• Numbers to be saved = n diagonal entries + the triangular part above the diagonal
= n + (n − 1) + (n − 2) + · · · + 1 + 0
(the diagonal contributes n; above the diagonal, row 1 contributes n − 1 entries, row 2 contributes n − 2, ..., row n contributes 0)
= 1 + 2 + 3 + · · · + n
= n(n + 1)/2
• The eigenvalues and eigenvectors of a real symmetric matrix are real.
• The eigenvectors of S are orthogonal to each other, and they can be used to diagonalize the matrix S.
Proof:
Say S is a real symmetric matrix, and suppose its eigenvalue λ and eigenvector v are complex.
Thus, Sv = λv (5)
Since S is real, taking the complex conjugate of Eqn. (5) gives
S v̄ = λ̄v̄ (6)
where λ̄ is the complex conjugate of λ and v̄ is the complex conjugate of v.
Taking the transpose of Eqn. (6) and using S^T = S →
v̄^T S = λ̄v̄^T (7)
Multiplying Eqn. (7) on the right by v →
v̄^T Sv = λ̄v̄^T v (8)
Taking the dot product of Eqn. (5) with v̄ →
v̄^T Sv = λv̄^T v (9)
The left sides of Eqn. (8) and Eqn. (9) are the same, and v̄^T v = length squared (real).
Hence, λ = λ̄, which is only possible when the complex part is zero:
a + ib = a − ib ⇒ b = 0

• Say the eigenvector matrix is V, containing the eigenvectors of A as columns.

Thus, A = V ΛV^{-1}
Now, A^T = (V^{-1})^T Λ^T V^T
Since A = A^T and Λ = Λ^T,
⇒ V^{-1} = V^T, or V^T V = I

Hence, the columns of V are orthonormal; thus V is an orthogonal matrix Q.
Thus, the eigenvectors of a symmetric matrix are orthogonal and hence can be made orthonormal. Thus,

S = QΛQ^{-1} = QΛQ^T

• A symmetric matrix S can be factorized as QΛQ^T, where Λ contains the eigenvalues as diagonal entries and Q contains orthonormal eigenvectors of S as columns.

Example:

S = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} ⇒ λ1 + λ2 = 5 and λ1λ2 = 0
λ² − 5λ + 0 = 0
⇒ λ = 0, 5

λ1 = 0, v_1 = \begin{pmatrix} -2 \\ 1 \end{pmatrix} and λ2 = 5, v_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}

q_1 = \frac{1}{\sqrt{5}}\begin{pmatrix} -2 \\ 1 \end{pmatrix} and q_2 = \frac{1}{\sqrt{5}}\begin{pmatrix} 1 \\ 2 \end{pmatrix}

Q = \frac{1}{\sqrt{5}}\begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}

QΛQ^T = \frac{1}{5}\begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 0 & 5 \end{pmatrix}\begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix} = \frac{1}{5}\begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 5 & 10 \end{pmatrix} = \frac{1}{5}\begin{pmatrix} 5 & 10 \\ 10 & 20 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} = S
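The same factorization can be checked with np.linalg.eigh, NumPy's eigensolver for symmetric (Hermitian) matrices, which returns orthonormal eigenvectors:

```python
# Check S = Q Λ Q^T for S = [[1, 2], [2, 4]] using eigh (symmetric eigensolver).
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
eigvals, Q = np.linalg.eigh(S)    # eigh returns orthonormal eigenvectors

assert np.allclose(Q.T @ Q, np.eye(2))              # Q is orthogonal
assert np.allclose(Q @ np.diag(eigvals) @ Q.T, S)   # S = Q Λ Q^T
print(eigvals)                                      # approximately [0. 5.]
```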

Spectral Theorem
Every symmetric matrix has the factorization with real eigenvalues in Λ and orthonormal eigenvectors in the columns of Q:

S = QΛQ^T (Note: Q^T = Q^{-1})

where Q is the eigenvector matrix and Λ is the (diagonal) eigenvalue matrix.

→ Every symmetric matrix has real λ's and orthogonal v's.


• A 2×2 symmetric matrix can be seen as →

S = QΛQ^T = \begin{pmatrix} q_1 & q_2 \end{pmatrix}\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}\begin{pmatrix} q_1^T \\ q_2^T \end{pmatrix} = \begin{pmatrix} q_1 & q_2 \end{pmatrix}\begin{pmatrix} \lambda_1 q_1^T \\ \lambda_2 q_2^T \end{pmatrix}
(rotation · stretch · rotate back)
= \lambda_1 q_1 q_1^T + \lambda_2 q_2 q_2^T


• Every symmetric matrix → S = λ1q1q1^T + λ2q2q2^T + · · · + λnqnqn^T (spectral theorem)
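A sketch of the spectral theorem as a sum of rank-one projections, using the earlier 2 × 2 example:

```python
# Rebuild S from its rank-one pieces: S = sum_i lambda_i q_i q_i^T.
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
eigvals, Q = np.linalg.eigh(S)

S_rebuilt = sum(lam * np.outer(q, q) for lam, q in zip(eigvals, Q.T))
assert np.allclose(S_rebuilt, S)
```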

• For real non-symmetric matrices A, complex eigenvalues and eigenvectors appear in conjugate pairs.

Example: A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix};

(cos θ − λ)(cos θ − λ) + sin²θ = 0
⇒ λ² − 2λ cos θ + 1 = 0

λ = \frac{2\cos\theta \pm \sqrt{4\cos^2\theta - 4}}{2}

λ = cos θ ± i sin θ
⇒ λ1 = cos θ + i sin θ
λ2 = cos θ − i sin θ

\begin{pmatrix} -i\sin\theta & -\sin\theta \\ \sin\theta & -i\sin\theta \end{pmatrix} v_1 = 0 ⇒ v_1 = \begin{pmatrix} 1 \\ -i \end{pmatrix}

\begin{pmatrix} i\sin\theta & -\sin\theta \\ \sin\theta & i\sin\theta \end{pmatrix} v_2 = 0 ⇒ v_2 = \begin{pmatrix} 1 \\ i \end{pmatrix}

NOTE: In the case of an orthogonal matrix Q, |λ| = 1; a quick numerical check follows.
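A sketch verifying the conjugate pair and |λ| = 1 for the rotation matrix, with an assumed angle θ = π/3:

```python
# Rotation matrix: complex-conjugate eigenvalues cos(theta) +/- i sin(theta), |lambda| = 1.
import numpy as np

theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals = np.linalg.eigvals(A)
assert np.allclose(np.abs(eigvals), 1.0)             # |lambda| = 1
assert np.isclose(eigvals[0], np.conj(eigvals[1]))   # conjugate pair
```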

• As discussed earlier, elimination does not preserve eigenvalues: the pivots of U are the eigenvalues of U, but they are not the eigenvalues of A.

– The product of the pivots equals the determinant of U as well as the determinant of A.

• In the case of a symmetric matrix S, the pivots and the eigenvalues have the same signs.

S = \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix} ⇒ λ² − 2λ − 8 = 0
⇒ (λ − 4)(λ + 2) = 0
⇒ λ = 4, −2

In general, for \begin{pmatrix} a & b \\ c & d \end{pmatrix}: (λ − a)(λ − d) − cb = 0
⇒ λ² − (a + d)λ + (ad − bc) = 0
⇒ λ² − tr(A)λ + det(A) = 0

U = \begin{pmatrix} 1 & 3 \\ 0 & -8 \end{pmatrix} ⇒ pivots 1, −8

S has one positive eigenvalue and one negative eigenvalue, and one positive pivot and one negative pivot.

• What happens if an eigenvalue of S is repeated (algebraic multiplicity more than 1)?

– Repeated eigenvalues in a general A sometimes yield a shortage of independent eigenvectors.
– But that is not the case with symmetric matrices (S = S^T): there are always enough eigenvectors to diagonalize.

• If S has repeated eigenvalues, one can always come up with n orthogonal eigenvectors, which diagonalize S:

S = QΛQ^T or Λ = Q^T SQ

Proof:
To prove that S can always be diagonalized, we take the help of Schur's theorem.
Schur's theorem says that any square matrix A can be 'triangularized' as

A = QT Q^{-1}

Here Q is orthogonal and T is upper triangular. (If the eigenvalues are complex, then Q^{-1} = Q̄^T.) A can have repeated eigenvalues.
Now we need to prove that T is diagonal when A is symmetric.

S = QT Q^{-1} ← S has only real eigenvalues and eigenvectors
⇒ T = Q^T SQ

Now, T^T = (Q^T SQ)^T = Q^T S^T Q = Q^T SQ = T, so T is symmetric.

Thus, T has to be diagonal (because T is also triangular).

This proves S = QΛQ^T.

Positive definite matrix

• A matrix A is said to be positive definite if x^T Ax > 0 for any nonzero x.
• All the eigenvalues of a positive definite A are positive: λi > 0.

• A symmetric matrix S is said to be positive definite if x^T Sx > 0 for all non-zero x.

– all eigenvalues of S are positive.

• If x^T Sx ≥ 0 for all x → S is positive semi-definite; if x^T Sx < 0 for all nonzero x → S is negative definite (and if x^T Sx takes both signs, S is indefinite).

• Now, S = A^T A is positive definite provided A has independent columns.

⋆ To check whether all the eigenvalues are positive, one can simply look at the signs of the pivots (applicable only for symmetric matrices) →

S = \begin{pmatrix} 2 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 2 \end{pmatrix} → \begin{pmatrix} 2 & 2 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 3/2 \end{pmatrix}

⋆ All pivots are positive ⇒ hence the eigenvalues are also positive.

⋆ Hence, S is a positive definite matrix, as the sketch below confirms.
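Two standard numerical checks of positive definiteness for this S: all eigenvalues positive, and the existence of a Cholesky factorization (np.linalg.cholesky raises an error for matrices that are not positive definite).

```python
# Positive definiteness checks for S: positive eigenvalues and a Cholesky factor.
import numpy as np

S = np.array([[2.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])

assert np.all(np.linalg.eigvalsh(S) > 0)   # all eigenvalues positive
L = np.linalg.cholesky(S)                  # succeeds only if S is positive definite
assert np.allclose(L @ L.T, S)
```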

⋆ In many applications, the requirement of positive energy gives rise to the other definition of a positive definite matrix.
Example:

(1/2)x^T Kx = strain energy stored in a deformed body, which is always positive for any non-zero displacement. Hence K is positive definite if x represents deformation and does not incorporate rigid body motion; otherwise it is positive semi-definite.

However, kinetic energy = (1/2)v^T M v > 0 for non-zero velocity v. Hence M is always positive definite.

⋆ If S and T are symmetric and positive definite, then (S + T) is also symmetric positive definite.

– x^T(S + T)x = x^T Sx + x^T T x > 0

⋆ If the columns of A are linearly independent, then S = A^T A is symmetric positive definite (see the sketch after this list):

– S^T = (A^T A)^T = A^T A = S → symmetric
– x^T(A^T A)x = (Ax)^T(Ax) = ‖Ax‖² > 0
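A sketch of this fact for an assumed 3 × 2 matrix A of full column rank:

```python
# S = A^T A is symmetric positive definite when A has independent columns.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])    # 3x2, full column rank (assumed example)
S = A.T @ A

assert np.allclose(S, S.T)                 # symmetric
assert np.all(np.linalg.eigvalsh(S) > 0)   # positive definite

x = np.array([0.7, -1.3])                  # arbitrary nonzero test vector
assert np.isclose(x @ S @ x, np.linalg.norm(A @ x) ** 2)
```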

⋆ x^T Sx = 1 is the equation of an ellipse whose major and minor axes are along the eigenvectors of S.

– Now, if the coordinate system is rotated by Q,

x^T Sx = x^T(QΛQ^T)x = (x^T Q)Λ(Q^T x) = (Q^T x)^T Λ(Q^T x) = v^T Λv

– in the rotated system, the principal axes of the ellipse are oriented along the coordinate directions (v).

Example:

S = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix} ⇒ x^T Sx = 1

⇒ \begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 5x + 4y \\ 4x + 5y \end{pmatrix} = 1
⇒ 5x² + 5y² + 8xy = 1

λ1 + λ2 = 10, λ1λ2 = 9 ⇒

λ1 = 1 → v_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix} → q_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}

λ2 = 9 → v_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} → q_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}

S = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 9 \end{pmatrix}\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}

⇒ S = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 9 & 9 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 10 & 8 \\ 8 & 10 \end{pmatrix} = \begin{pmatrix} 5 & 4 \\ 4 & 5 \end{pmatrix} = S ✓

Now,

Q^T x = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} x - y \\ x + y \end{pmatrix}

(Q^T x)^T Λ(Q^T x) = \frac{1}{2}\begin{pmatrix} x - y \\ x + y \end{pmatrix}^T \begin{pmatrix} 1 & 0 \\ 0 & 9 \end{pmatrix} \begin{pmatrix} x - y \\ x + y \end{pmatrix}
= \frac{1}{2}\{(x - y)^2 + 9(x + y)^2\}
= \left(\frac{x - y}{\sqrt{2}}\right)^2 + 9\left(\frac{x + y}{\sqrt{2}}\right)^2
= X² + 9Y², where X = (x − y)/√2 and Y = (x + y)/√2

∗ With the help of Q one can achieve the principal axes → that is why this is also named the principal axis theorem.
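A numerical sketch of this principal-axis change of coordinates, evaluated at an arbitrary sample point:

```python
# In rotated coordinates (X, Y) = Q^T (x, y), the form x^T S x becomes X^2 + 9 Y^2.
import numpy as np

S = np.array([[5.0, 4.0],
              [4.0, 5.0]])
eigvals, Q = np.linalg.eigh(S)    # eigvals = [1, 9]

xy = np.array([0.3, -0.8])        # arbitrary sample point
XY = Q.T @ xy                     # rotated coordinates

quad_original = xy @ S @ xy
quad_rotated = eigvals[0] * XY[0]**2 + eigvals[1] * XY[1]**2
assert np.isclose(quad_original, quad_rotated)
```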
