
ALL IMPORTANT FORMULAS FOR BSM 101


1. Calculus (differentiation):
a) Mean Value Theorems:
Rolle’s Theorem: Statement: Let f be a function defined on a finite closed interval [a, b] such that
(i) f(x) is continuous for all x in a ≤ x ≤ b,
(ii) f′(x) exists for all x in a < x < b,
(iii) f(a) = f(b).
Then there exists at least one value c, a < c < b, such that f′(c) = 0.

Lagrange’s theorem: Statement: Let f be a function defined on a finite closed interval [a, b] such that
(i) f(x) is continuous for all x, 𝑎 ≤ 𝑥 ≤ 𝑏
(ii) f’(x) exists for all x, a<x<b.
Then there exists at least one value c, a < c < b, such that
$\frac{f(b) - f(a)}{b - a} = f'(c).$
𝞱 form of Lagrange’s Mean Value Theorem:
Let f be a function defined on a finite closed interval [a, a +h] such that
(i) f is continuous on [𝑎, 𝑎 + ℎ].
(ii) f is derivable on (𝑎, 𝑎 + ℎ).
Then 𝑓(𝑎 + ℎ) = 𝑓(𝑎) + ℎ 𝑓 ' (𝑎 + 𝜃ℎ), 0 < 𝜃 < 1 … (I).
Note: For the interval [0, 𝑥], equation (I) reduces to
f(x) = f(0) + x f′(θx), 0 < θ < 1 … (II).
This form is known as Maclaurin's Formula.
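Example: For f(x) = x² on [1, 3], Lagrange's theorem gives (f(3) − f(1))/(3 − 1) = (9 − 1)/2 = 4 = f′(c) = 2c, so c = 2, which indeed lies in (1, 3).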

Cauchy’s Mean Value Theorem: Let f and g be functions defined on a finite closed interval [a, b] such that
(i) f(x) and g(x) are continuous for all x, a ≤ x ≤ b
(ii) f′(x) and g′(x) exist for all x, a < x < b
(iii) g′(x) ≠ 0 for all x ∈ (a, b).
Then there exists at least one value c, a < c < b, such that
$\frac{f(b) - f(a)}{g(b) - g(a)} = \frac{f'(c)}{g'(c)}.$

Generalized Mean Value Theorem: Taylor’s Theorem:


Statement (Lagrange’s form of remainder): Let f be a function defined on a finite closed interval [a, b] such
that
(i) $f^{(n-1)}$ is continuous on [a, b]
(ii) $f^{(n)}$ exists on (a, b)
Then there exists at least one value c, a < c < b, such that
$f(b) = f(a) + (b-a)f'(a) + \frac{(b-a)^2}{2!}f''(a) + \cdots + \frac{(b-a)^{n-1}}{(n-1)!}f^{(n-1)}(a) + \frac{(b-a)^n}{n!}f^{(n)}(c).$

𝞱 form: Statement: Let f be a function defined on a finite closed interval [a, a+ h] , h> 0 such that
(i) $f^{(n-1)}$ is continuous on [a, a+h]
(ii) $f^{(n)}$ exists on (a, a+h)
Then there exists at least one value θ, 0 < θ < 1, such that
$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!}f''(a) + \cdots + \frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a) + \frac{h^n}{n!}f^{(n)}(a+\theta h).$


IMPORTANT NOTE: 𝑷𝒖𝒕 𝒃 = 𝒙 𝒐𝒓 𝒂 + 𝒉 = 𝒙


Statement (Cauchy’s form of remainder): Let f be a function defined on a finite closed interval [a, b] such
that
(i) $f^{(n-1)}$ is continuous on [a, b]
(ii) $f^{(n)}$ exists on (a, b)
Then there exists at least one value c, a < c < b, such that
$f(b) = f(a) + (b-a)f'(a) + \frac{(b-a)^2}{2!}f''(a) + \cdots + \frac{(b-a)^{n-1}}{(n-1)!}f^{(n-1)}(a) + \frac{(b-a)(b-c)^{n-1}}{(n-1)!}f^{(n)}(c).$

𝞱 form: Statement: Let f be a function defined on a finite closed interval [a, a+ h] , h> 0 such that
(i) $f^{(n-1)}$ is continuous on [a, a+h]
(ii) $f^{(n)}$ exists on (a, a+h)
Then there exists at least one value θ, 0 < θ < 1, such that
$f(a+h) = f(a) + hf'(a) + \frac{h^2}{2!}f''(a) + \cdots + \frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a) + \frac{h^n(1-\theta)^{n-1}}{(n-1)!}f^{(n)}(a+\theta h).$
IMPORTANT NOTE: 𝑷𝒖𝒕 𝒃 = 𝒙 𝒐𝒓 𝒂 + 𝒉 = 𝒙

Maclaurin’s Theorem: Statement (Lagrange’s form of remainder): (Taking a=0 and h=x in Taylor’s)
Let f be a function defined on a finite closed interval [0, x] , x> 0 such that
(i) $f^{(n-1)}$ is continuous on [0, x]
(ii) $f^{(n)}$ exists on (0, x)
Then there exists at least one value θ, 0 < θ < 1, such that
$f(x) = f(0) + xf'(0) + \frac{x^2}{2!}f''(0) + \cdots + \frac{x^{n-1}}{(n-1)!}f^{(n-1)}(0) + \frac{x^n}{n!}f^{(n)}(\theta x).$
Maclaurin’s Theorem: Statement (Cauchy’s form of remainder): (Taking a=0 and h=x in Taylor’s)
Let f be a function defined on a finite closed interval [0, x] , x> 0 such that
(i) $f^{(n-1)}$ is continuous on [0, x]
(ii) $f^{(n)}$ exists on (0, x)
Then there exists at least one value θ, 0 < θ < 1, such that
$f(x) = f(0) + xf'(0) + \frac{x^2}{2!}f''(0) + \cdots + \frac{x^{n-1}}{(n-1)!}f^{(n-1)}(0) + \frac{x^n(1-\theta)^{n-1}}{(n-1)!}f^{(n)}(\theta x).$
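Example: For f(x) = eˣ every derivative is eˣ and f⁽ᵏ⁾(0) = 1, so Maclaurin's theorem with Lagrange's form of remainder gives
$e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^{n-1}}{(n-1)!} + \frac{x^n}{n!}e^{\theta x}, \quad 0 < \theta < 1.$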

b) Maxima & Minima: Necessary condition for maximum or minimum:


If a function f(x) has a maximum or a minimum at a point x = c and f′(c) exists, then f′(c) = 0.
Evaluation of Maxima and Minima: Theorem 1. Let f(x) be a function defined on an interval [a, b] and let c be a point in this interval. If f′(c) = 0 and f″(c) ≠ 0 then f(x) has a
(i) 𝑚𝑎𝑥𝑖𝑚𝑢𝑚 𝑎𝑡 𝑥 = 𝑐 𝑖𝑓 𝑓"(𝑐) < 0
(ii) 𝑚𝑖𝑛𝑖𝑚𝑢𝑚 𝑎𝑡 𝑥 = 𝑐 𝑖𝑓 𝑓"(𝑐) > 0
Theorem 2. Let f(x) be a function defined on an interval [a, b] and let c be a point in this interval.
If 𝑓 ' (𝑐) = 𝑓 '' (𝑐) = 𝑓 ''' (𝑐) = ⋯ = 𝑓 (*%+) (𝑐) = 0 𝑎𝑛𝑑 𝑓 (*) (𝑐) ≠ 0, Then,
(i) for n odd, f(c) is neither maximum nor minimum.
(ii) for n even, f(c) is maximum if 𝑓 (*) (𝑐) < 0 and minimum if 𝑓 (*) (𝑐) > 0.
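Example: For f(x) = x⁴ at c = 0, f′(0) = f″(0) = f‴(0) = 0 and f⁽⁴⁾(0) = 24 > 0, so with n = 4 (even) f has a minimum at x = 0; for f(x) = x³ at c = 0, n = 3 is odd, so x = 0 gives neither a maximum nor a minimum.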

c) Indeterminate Form (L’Hospital’s rules):


Generally, a function is said to assume indeterminate form when it takes up any one of the following forms:
(A) $\frac{0}{0}$ (B) $\frac{\infty}{\infty}$ (C) $\infty - \infty$ (D) $0 \cdot \infty$ (E) $0^0,\ 1^{\infty},\ \infty^0$ etc.
L’Hospital’s Theorem:
If two functions ƒ and g are
(i) Continuous in the closed interval [a, b]
(ii) Derivable in the open interval (a, b)
(iii) $\lim_{x \to a} f(x) = 0 = \lim_{x \to a} g(x)$

Then $\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}$, provided the latter limit exists.
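Example: $\lim_{x \to 0} \frac{\sin x}{x}$ is of the form $\frac{0}{0}$; by L'Hospital's rule it equals $\lim_{x \to 0} \frac{\cos x}{1} = 1$.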

2. Calculus (Integration):
a) Evolutes & Involutes:
Cartesian form: If y=f(x) be the curve then Radius of Curvature at the given point (a, b) is
$\rho = \left| \frac{(1+y_1^2)^{3/2}}{y_2} \right|, \quad (y_2 \neq 0), \quad \text{where } y_1 = \frac{dy}{dx} \text{ at } (a, b),\ y_2 = \frac{d^2y}{dx^2} \text{ at } (a, b).$

Polar Form: If 𝑟 = 𝑓(𝜃) be the curve then the Radius of Curvature at the given point is
$\rho = \left| \frac{(r^2+r_1^2)^{3/2}}{r^2+2r_1^2-rr_2} \right|, \quad \text{where } r_1 = \frac{dr}{d\theta},\ r_2 = \frac{d^2r}{d\theta^2} \text{ at the given point.}$

If C(x̄, ȳ) be the coordinates of the Centre of Curvature of Γ at P, then
$\bar{x} = x - \frac{y_1(1+y_1^2)}{y_2}, \quad \bar{y} = y + \frac{1+y_1^2}{y_2}, \quad (y_2 \neq 0), \quad \text{where } y_1 = \frac{dy}{dx} \text{ at } P,\ y_2 = \frac{d^2y}{dx^2} \text{ at } P.$
Let Γ be the curve and C(x̄, ȳ) be the Centre of Curvature of Γ at P. The locus of C as the point P travels along Γ is called the Evolute of the curve Γ. If the curve Γ₁ be the evolute of Γ, then Γ is called the Involute of the curve Γ₁.
To find the equation of the evolute, obtain a relation between x̄ and ȳ, and then in that relation replace x̄ by x and ȳ by y.
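Example: For the parabola y = x² at the origin, y₁ = 0 and y₂ = 2, so ρ = (1 + 0)^{3/2}/2 = 1/2 and the centre of curvature is (x̄, ȳ) = (0, 1/2).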

b) Improper Integral:
Improper Integrals of First Kind:
(A) Type 1: $\int_a^{\infty} f(x)\,dx$. (B) Type 2: $\int_{-\infty}^{b} f(x)\,dx$. (C) Type 3: $\int_{-\infty}^{\infty} f(x)\,dx$.
Improper Integrals of Second Kind:
Type 1: $\int_a^b f(x)\,dx$, the only point of infinite discontinuity is at x = a.
Type 2: $\int_a^b f(x)\,dx$, the only point of infinite discontinuity is at x = b.
Type 3: $\int_a^b f(x)\,dx$, both end points a and b are the only points of infinite discontinuity of f(x).
Type 4: $\int_a^b f(x)\,dx$, c is the only point of infinite discontinuity of f(x), a < c < b.
Some Standard Improper Integrals:
(A) Convergence of $\int_a^{\infty} \frac{dx}{x^n}$, (a > 0): convergent if n > 1 and divergent if n ≤ 1.

(B) Convergence of $\int_a^b \frac{dx}{(x-a)^n}$, (n > 0): convergent if n < 1 and divergent if n ≥ 1.
(C) Convergence of $\int_a^b \frac{dx}{(b-x)^n}$, (n > 0): convergent if n < 1 and divergent if n ≥ 1.
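Example: $\int_1^{\infty} \frac{dx}{x^2} = \left[-\frac{1}{x}\right]_1^{\infty} = 1$ is convergent (n = 2 > 1), whereas $\int_1^{\infty} \frac{dx}{x}$ is divergent (n = 1).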

c) Beta & Gamma Functions:


Definition of Gamma Function: $\Gamma(n) = \int_0^{\infty} e^{-x} x^{n-1}\,dx$, (n > 0).
Properties of Gamma Function:
i) Γ(1) = 1
ii) Γ(n + 1) = nΓ(n)
iii) Γ(n + 1) = n!, when n is a positive integer.
iv) $\Gamma(n)\Gamma(1-n) = \frac{\pi}{\sin n\pi}$, 0 < n < 1 (Euler's reflection formula)
v) $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$
+
Definition of Beta Function: $B(m, n) = \int_0^1 x^{m-1}(1-x)^{n-1}\,dx$, (m, n > 0)
Properties of Beta Function:
(i) 𝐵(𝑚, 𝑛) = 𝐵(𝑛, 𝑚)
(ii) $B(m,n) = \frac{(m-1)!}{n(n+1)(n+2)\cdots(n+m-1)}$, when m is a positive integer (by symmetry, the same holds with m and n interchanged).
(iii) $B(m,n) = 2\int_0^{\pi/2} \sin^{2m-1}\theta\, \cos^{2n-1}\theta\,d\theta$.
(iv) $B(m,n) = \int_0^{\infty} \frac{x^{m-1}}{(1+x)^{m+n}}\,dx$.
(v) $B\left(\frac{1}{2}, \frac{1}{2}\right) = \pi$
Relation between Beta and Gamma functions: $B(m, n) = \frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}$.
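Example: $B(2, 3) = \frac{\Gamma(2)\Gamma(3)}{\Gamma(5)} = \frac{1! \cdot 2!}{4!} = \frac{2}{24} = \frac{1}{12}$.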
d) Surface Area and Volume of Revolution:
Volume of Revolution, the axis of revolution being X axis: If the region R enclosed by the curve of a
continuous function y=f(x), as 𝑎 ≤ 𝑥 ≤ 𝑏, the ordinates x=a, x=b and the x-axis be rotated about X-axis then
the volume of revolution
$V = \pi \int_a^b y^2\,dx = \pi \int_a^b \{f(x)\}^2\,dx$
Surface of Revolution, the axis of revolution being X axis: If the region R enclosed by the curve of a
continuous function y=f(x), as 𝑎 ≤ 𝑥 ≤ 𝑏, the ordinates x=a, x=b and the x-axis be rotated about X-axis then
the surface of revolution
$S = 2\pi \int_a^b y\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx = 2\pi \int_a^b f(x)\sqrt{1+\{f'(x)\}^2}\,dx$
Volume of Revolution, the axis of revolution being Y axis: If the region R enclosed by the curve of a continuous function x = φ(y), as c ≤ y ≤ d, the lines y = c, y = d and the Y-axis be rotated about the Y-axis, then the volume of revolution
$V = \pi \int_c^d x^2\,dy = \pi \int_c^d \{\phi(y)\}^2\,dy$
Surface of Revolution, the axis of revolution being Y axis: If the region R enclosed by the curve of a continuous function x = φ(y), as c ≤ y ≤ d, the lines y = c, y = d and the Y-axis be rotated about the Y-axis, then the surface of revolution

$S = 2\pi \int_c^d x\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy = 2\pi \int_c^d \phi(y)\sqrt{1+\{\phi'(y)\}^2}\,dy$
Volume of Revolution, the axis of revolution being X axis: If the equation of the curve be in parametric
form as x=f(t), y=g(t), t being the parameter, then the volume(V) of the solid of revolution is given by
$V = \pi \int_{t_1}^{t_2} \{g(t)\}^2 f'(t)\,dt$
where t₁, t₂ are the values of t corresponding to x = a and x = b respectively.
Surface of Revolution, the axis of revolution being X axis: If the equation of the curve be in parametric
form as x=f(t), y=g(t), t being the parameter, then the Surface area (S) of the solid of revolution is given by
$S = 2\pi \int_{t_1}^{t_2} y\sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt$
where t₁, t₂ are the values of t corresponding to x = a and x = b respectively.
Volume and Surface of revolution if the curve revolved be given by its Polar equation and the polar axis is the Axis of Revolution: Let AB be the curve, where the vectorial angles of A and B are θ₁ and θ₂ respectively. If AB is revolved about the polar axis OX, then
Volume of Revolution $= \pi \int_{\theta_1}^{\theta_2} r^2 \sin^2\theta\; d(r\cos\theta)$
Surface of Revolution $= 2\pi \int_{\theta_1}^{\theta_2} r\sin\theta \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\,d\theta$
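Example: Rotating y = √(a² − x²), −a ≤ x ≤ a, about the X-axis generates the sphere of radius a: V = π∫_{−a}^{a}(a² − x²) dx = 4πa³/3, and since y√(1 + y₁²) = a here, S = 2π∫_{−a}^{a} a dx = 4πa².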

3. Matrix & Determinants:


Different types of Matrices:
(a) RECTANGULAR MATRIX: A matrix A is said to be rectangular if the number of rows is not equal to
the number of columns , i.e., if 𝑚 ≠ 𝑛.
(b) SQUARE MATRIX: A matrix A is said to be a square matrix if the number of rows and columns are
equal. i.e., if 𝑚 = 𝑛.
(c) ROW MATRIX OR ROW VECTOR: A matrix of n elements arranged in one row only is called a row matrix or a row vector, i.e. it is a 1 × n matrix.
(d) COLUMN MATRIX OR COLUMN VECTOR: A matrix of m elements arranged in one column only is called a column matrix or a column vector, i.e. it is an m × 1 matrix.
(e) NULL MATRIX OR ZERO MATRIX: All the elements of the matrix are 0.
(f) DIAGONAL MATRIX: A square matrix in which all off-diagonal elements are zero is called a diagonal
matrix.
(g) UNIT MATRIX OR IDENTITY MATRIX: The Diagonal elements are all 1 and all the off-diagonal
elements are all 0.
(h) ORTHOGONAL MATRIX: A square matrix A is said to be orthogonal if AAᵀ = AᵀA = I, where I is the identity matrix of the same order.
Properties:
I. If A and B are orthogonal matrices of the same order, then AB is orthogonal.
II. Every orthogonal matrix A is non-singular and det(A) = ±1.
III. If A and B are orthogonal matrices and det(A) + det(B) = 0, then A + B is singular.
IV. If A is an orthogonal matrix, then Aᵀ and A⁻¹ are also orthogonal.
V. If A is a skew-symmetric matrix, then (I − A)(I + A)⁻¹ is an orthogonal matrix.
(i) IDEMPOTENT MATRIX: A square matrix A is said to be an idempotent matrix if A² = A.

(j) NILPOTENT MATRIX: A square matrix A is said to be a nilpotent matrix with index k if k is the least positive integer for which Aᵏ = O, a null (or zero) matrix.
(k) INVOLUTORY MATRIX: A square matrix A is said to be an involutory matrix if A² = I.
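Example: The 2 × 2 matrix with rows (0, 1) and (0, 0) is nilpotent with index 2, since its square is O; the diagonal matrix with diagonal entries 1 and −1 is involutory, since its square is I.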

Note: MATRIX MULTIPLICATION IS NON-COMMUTATIVE.


If A, B are two matrices, even if AB and BA are defined, 𝐴𝐵 ≠ 𝐵𝐴, in general.
Properties of TRANSPOSE OF A MATRIX:
If A and B are two matrices, then
(i) (kA)ᵀ = kAᵀ, where k is a scalar (or number)
(ii) (Aᵀ)ᵀ = A
(iii) (A + B)ᵀ = Aᵀ + Bᵀ, provided A + B is defined
(iv) (A − B)ᵀ = Aᵀ − Bᵀ, provided A − B is defined
(v) (AB)ᵀ = BᵀAᵀ, provided AB is defined
SYMMETRIC AND SKEW-SYMMETRIC MATRICES:
(i) A square matrix A is said to be symmetric if Aᵀ = A.
(ii) A square matrix A is said to be skew-symmetric if Aᵀ = −A.
PROPERTIES:
(a) If A, B are symmetric matrices of same order, then A+B is also symmetric.
(b) The product of two symmetric matrices A, B of same order is symmetric if and only if AB = BA.
(c) If A be a square matrix, then A + Aᵀ is symmetric and A − Aᵀ is skew-symmetric.
(d) Any square matrix can be expressed uniquely as a sum of a symmetric matrix and a skew-symmetric matrix.
(e) If A is a skew-symmetric matrix, then A² is symmetric.
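Example: The decomposition in (d) is A = ½(A + Aᵀ) + ½(A − Aᵀ); the first summand is symmetric and the second is skew-symmetric.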

Singular & Non-Singular Matrix: A square matrix A is said to be singular if its determinant, i.e. det(A) = |A|, is 0; otherwise it is called a Non-Singular Matrix.
Inverse of a Matrix: Let A be a square matrix. Another matrix B of the same size is said to be the Inverse of A if AB = BA = I, where I is the identity matrix of the same size. If B is the inverse of A then it is denoted by A⁻¹. Thus, AA⁻¹ = A⁻¹A = I.
Theorem: A non-singular square matrix A (i.e. det(A) ≠ 0) is invertible and $A^{-1} = \frac{1}{\det(A)}\,\mathrm{Adj}(A)$.
Properties:
1. The inverse of any matrix is unique.
2. For any invertible matrix A, (A⁻¹)⁻¹ = A.
3. If A and B are invertible, then (AB)⁻¹ = B⁻¹A⁻¹.
4. For any invertible matrix A, (A⁻¹)ᵀ = (Aᵀ)⁻¹.

Determinants:
Properties of Determinant:
Property 1. The value of a determinant is unaltered if the determinant is transposed, i.e. if rows and columns
are interchanged.
Property 2. The absolute value of a determinant is unaltered but its sign is changed if two adjacent rows / columns are interchanged.
Property 3. If two rows /columns of a determinant are identical then the value of the determinant is 0.

Property 4. If all the elements of one row/column are multiplied by a number then the value of the
determinant is multiplied by that number
Property 5. If each element of a row / column is expressed as the sum of two numbers then the determinant
can be expressed as sum of two determinants
Property 6. The value of a determinant is not altered by adding to the elements of any row / column the same
multiple of the corresponding elements of any other row / column.

Cofactor of an element in a Determinant: If a_rs is the element in a determinant lying in the r-th row and s-th column, then the cofactor of a_rs = (−1)^{r+s} × minor of a_rs in the determinant, and is denoted by A_rs.
Minor of an element in a Determinant: The sub-determinant of a determinant |A| obtained by deleting the
r-th row and s-th column is called the Minor of the element ars in |A|, where ars stands for the element
belonging to r-th row and s-th column in |A|.
Jacobi's Theorem (for 3rd order determinant).
If D be a 3rd order non-zero determinant, then D′ = D², where D′ is the adjugate/adjoint determinant of D.

Laplace method of expansion of Determinant: In an n-th order determinant, D = |a_ij|_{n×n}, if any r rows be selected, D can be expressed as the sum of the products of all minors of order r formed from those r rows and their respective algebraic complements.
Cramer’s Rule for Solution of Linear Equations:

Let
$a_1x + b_1y + c_1z = d_1, \quad a_2x + b_2y + c_2z = d_2, \quad a_3x + b_3y + c_3z = d_3$
be a system of three linear equations in the three unknowns x, y and z.
If the coefficient determinant $D = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} \neq 0$,
then $x = \frac{D_1}{D}, \quad y = \frac{D_2}{D}, \quad z = \frac{D_3}{D}$, where D₁, D₂, D₃ are the determinants obtained from D by replacing its first, second and third columns respectively by the column of constants d₁, d₂, d₃.

Corollary:
(1) If in the system d₁ = d₂ = d₃ = 0 and D ≠ 0, then x = y = z = 0.
(2) If in the system D = 0 and at least one of D₁, D₂ and D₃ is ≠ 0, the system certainly has no solution.
(3) If D = 0 and D₁ = D₂ = D₃ = 0 also, then the system has an infinite number of solutions.
RANK OF A MATRIX:
Rank of a matrix A is the positive integer r such that
(i) there exists at least one rth order non-singular square submatrix of A.
(ii) all the square submatrices of A of order greater than r are singular.
(Equivalently, the rank is the number of non-zero rows in the Row Reduced Echelon Form of A.)
Rank of matrix A is denoted by rank (A) / R(A) / 𝝆(𝑨).
ELEMENTARY ROW OPERATIONS ON A MATRIX:
An elementary row operation on a matrix A is any one operation of the following three types:
(i) Interchange of any two rows of A. Interchange of ith and jth rows of A is denoted by Rij.
(ii) Multiplication of a row by a non-zero scalar k. Multiplication of ith row of A by a non-zero scalar k is
denoted by kRi.

(iii) Addition of a scalar multiple of one row to another row. Addition of k times the elements of jth row to
the corresponding elements of ith row is denoted by Ri +kRj
ECHELON MATRIX:
A matrix A is said to be an Echelon matrix or is said to be in echelon form if
(i) All zero-rows of A follow all non-zero rows.
(ii) The number of zeros preceding the first non-zero element of a row increases as we pass from row to row downwards.
Theorem 1. Every matrix can be made row equivalent to an Echelon matrix.
Theorem 2. Elementary operations (row and/or column) do not alter the rank of a matrix.
Theorem 3. If an Echelon matrix has r number of non-zero rows, then the rank of this matrix is r.
Properties of Rank:
1. Ranks of A and AT are same.
2. Rank of a null matrix is zero.
3. For a matrix of order 𝑚 × 𝑛, 𝑟𝑎𝑛𝑘 (𝐴) ≤ 𝑚𝑖𝑛 (𝑚, 𝑛).
4. For an n-th order square matrix A, if rank(A) = n, then |A| ≠ 0, i.e., A is non-singular.
5. For any square matrix A of order n, if rank(A) < n, then |A| = 0, i.e., A is singular.
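Example: For A with rows (1, 2) and (2, 4), the row operation R₂ → R₂ − 2R₁ leaves one non-zero row, so rank(A) = 1 < 2, and indeed |A| = 0.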
Rank-nullity theorem: For a 𝑚 × 𝑛 matrix A, 𝑹𝒂𝒏𝒌(𝑨) + 𝑵𝒖𝒍𝒍𝒊𝒕𝒚(𝑨) = 𝒏
CONSISTENCY AND INCONSISTENCY OF SYSTEM OF LINEAR EQUATION:
I. The system is inconsistent i.e. has no solution if and only if 𝑹𝒂𝒏𝒌 (𝑨) ≠ 𝑹𝒂𝒏𝒌 ([𝑨: 𝒃])
II. The system is consistent i.e. has solution if and only if 𝑹𝒂𝒏𝒌 (𝑨) = 𝑹𝒂𝒏𝒌 ([𝑨: 𝒃]).
(a) The system has unique (exactly one) solution if 𝑹𝒂𝒏𝒌(𝑨) = 𝑹𝒂𝒏𝒌([𝑨: 𝒃]) =
𝒏 (𝒏𝒐. 𝒐𝒇 𝒗𝒂𝒓𝒊𝒂𝒃𝒍𝒆𝒔).
(b) The system has many (more than one) solutions if 𝑹𝒂𝒏𝒌 (𝑨) = 𝑹𝒂𝒏𝒌 ([𝑨: 𝒃]) <
𝒏(𝒏𝒐. 𝒐𝒇 𝒗𝒂𝒓𝒊𝒂𝒃𝒍𝒆𝒔).
Where, [A:b] = The Augmented Matrix
This method is also called the Gauss-Jordan Elimination Method or simply the Gauss Elimination Method.

Summary (decision tree):
- Inconsistent, i.e. no solution. Condition: Rank(A) ≠ Rank([A:b]).
- Consistent, i.e. has a solution. Condition: Rank(A) = Rank([A:b]).
  - Unique (exactly one) solution. Condition: Rank(A) = Rank([A:b]) = n (no. of variables).
  - Many (more than one) solutions. Condition: Rank(A) = Rank([A:b]) < n (no. of variables).
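Example: For x + y = 2, 2x + 2y = 4, Rank(A) = Rank([A:b]) = 1 < 2, so there are many solutions; changing the second equation to 2x + 2y = 5 makes Rank([A:b]) = 2 ≠ Rank(A) = 1, so the system has no solution.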

4. Vector Space:
Definition of Vector Space:
Let V be a non-empty set and ⨁ be a composition in V i.e. ⨁ composes two elements of V; R be the field of
real numbers.
Let ⨀ be a composition which composes the elements of R with the elements of V. (i.e. ⨀ is a mapping
whose domain is 𝑅 × 𝑉).
V is said to be a vector space over the field R, if the following axioms are satisfied:
I. V is abelian group with respect to the composition ⨁, that is
(i) 𝛼 ⨁ 𝛽 ∈ 𝑉 𝑓𝑜𝑟 𝑎𝑙𝑙 𝛼 𝑎𝑛𝑑 𝛽 𝑖𝑛 𝑉. [closure property under ⨁]
(ii) 𝛼 ⨁(𝛽 ⨁ 𝛾) = (𝛼 ⨁ 𝛽)⨁𝛾 𝑓𝑜𝑟 𝑎𝑙𝑙 𝛼, 𝛽, 𝛾 𝑖𝑛 𝑉 [Associative property under ⨁]
(iii) V contains an element, say θ, such that α ⨁ θ = α for all α in V.
(iv) Corresponding to each element α in V, there exists an element, say −α in V such that α ⨁ (−α) = θ
(v) (α ⨁ β) = (β ⨁ α) for all α, β in V [commutative property under ⨁]
II.
(i) c⨀α ∈ V for all c in R and for all α in V.
(ii) 1⨀α = α for all α in V.
(iii) (c·d)⨀α = c⨀(d⨀α) for all c, d in R and for all α in V.
(iv) c⨀(α ⨁ β) = (c⨀α)⨁(c⨀β) for all c in R and for all α, β in V.
(v) (c + d)⨀α = (c⨀α)⨁(d⨀α) for all c, d in R and for all α in V.

Subspace: Let V be a vector space over the field R. A subset S of V is called a subspace of V if S itself is a vector space over the same field R under the same compositions of V.
Theorem: (A criterion for subspace)
A non-null subset S of a vector space V is subspace, if and only if
(i) 𝛼 + 𝛽 ∈ 𝑆 𝑓𝑜𝑟 𝑎𝑙𝑙 𝛼 𝑎𝑛𝑑 𝛽 𝑖𝑛 𝑆.
(ii) c𝛼 ∈ 𝑆 for all c in R and for all 𝛼 in S.
Theorem: Intersection of two subspaces is a subspace.

Linear Combination of Vectors: Let α₁, α₂, …, α_r be r vectors in a vector space V over R. Then
$c_1\alpha_1 + c_2\alpha_2 + \cdots + c_r\alpha_r = \sum_{i=1}^{r} c_i\alpha_i$
is called a linear combination of the vectors α₁, α₂, …, α_r, where c₁, c₂, …, c_r belong to R.
Linear Dependence and Independence of Vectors:
Let V be a vector space over R. The set of vectors {α₁, α₂, …, α_r} is called linearly dependent (or simply dependent) if it is possible to find r scalars c₁, c₂, …, c_r in R, at least one non-zero, such that
c₁α₁ + c₂α₂ + ⋯ + c_rα_r = θ.
If α₁, α₂, …, α_r are not linearly dependent then they are called linearly independent (or simply independent); that is, α₁, α₂, …, α_r are linearly independent if
c₁α₁ + c₂α₂ + ⋯ + c_rα_r = θ ⟹ c₁ = c₂ = ⋯ = c_r = 0.
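Example: In ℝ², the vectors (1, 2) and (2, 4) are dependent, since 2(1, 2) − 1(2, 4) = θ, while (1, 0) and (0, 1) are independent.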
Test of linear dependence and independence using elementary row operations on a matrix:
Form the matrix A whose rows are the vectors α₁, α₂, …, α_m. Then the Rank of A = maximum number of independent vectors among α₁, α₂, …, α_m.
Corollary. All the vectors α₁, α₂, …, α_m are independent if and only if Rank of A = m. If Rank of A < m then the vectors α₁, α₂, …, α_m are dependent. Obviously the Rank of A cannot be greater than m.


Generator or Spanning vectors: Let V be a vector space over R. Also let α₁, α₂, …, α_r be r vectors in V and let S be a subspace of V (S may be equal to V). If every element of S can be expressed as a linear combination of the vectors α₁, α₂, …, α_r, then we say α₁, α₂, …, α_r generate or span the subspace S.
Basis: Let V be a vector space over R. The set of vectors {α₁, α₂, …, α_r} is said to be a basis of V if α₁, α₂, …, α_r are linearly independent and if they generate V.
Dimension or Rank of a Vector Space: The number of vectors present in a basis of a vector space V is called the dimension of V. It is denoted by dim(V).
Basis & Dimension of Subspace: Since a subspace S of a vector space V is itself a vector space, the subspace must have a basis and consequently a dimension or rank. Thus A = {α₁, α₂, …, α_r} will be a basis of a subspace S of V if A is linearly independent and every vector of S can be expressed as a linear combination of α₁, α₂, …, α_r. Since the basis A contains r vectors, dim(S) = r.

Extension theorem: A linearly independent set of vectors can be extended to a basis if it is not a basis itself.
Replacement theorem: If the set {α₁, α₂, …, α_n} be a basis of a vector space V over R and if β be a non-null vector in V such that β can be expressed as β = c₁α₁ + c₂α₂ + ⋯ + c_nα_n, where the scalar c_j ≠ 0, then if α_j is replaced by β, the set {α₁, α₂, …, α_{j−1}, β, α_{j+1}, …, α_n} will also be a basis of V.

Linear Transformation (Mapping):


Introduction: Let A and B be two non-empty sets. A law or rule T is said to be a transformation or mapping from A to B if to each element x in A we get a definite element y in B by this rule. We denote it by T: A → B and T(x) = y.
Domain and Co-domain of a Transformation:
If 𝑻: 𝑨 → 𝑩 is a transformation then the set A is called domain and the set B is called co-domain of T.
Image of a Transformation: If T: A → B be a transformation, then for any element x in A we get a definite element y in B. We write T(x) = y. Here y is called the image of x under T. In other words, T(x) is the image of x. If x ∈ A then T(x) ∈ B.
Image-set or Range: Let 𝑻: 𝑨 → 𝑩 be a transformation. Set of all images by T is a subset of B. This subset is
called image set of T. It is denoted by T(A) or Im(T)
Linear Transformation: Let V and W be two vector spaces over the Real Field. A transformation 𝑻: 𝑽 → 𝑾
is said to be linear transformation if
(i) 𝑻(𝜶 + 𝜷) = 𝑻(𝜶) + 𝑻(𝜷) 𝒇𝒐𝒓 𝒂𝒍𝒍 𝜶, 𝜷 𝒊𝒏 𝑽
(ii) 𝑻(𝒄. 𝜶) = 𝒄. 𝑻(𝜶) 𝒇𝒐𝒓 𝒂𝒍𝒍 𝒄 𝒊𝒏 𝑹 & 𝒇𝒐𝒓 𝒂𝒍𝒍 𝜶 𝒊𝒏 𝑽 .

Null Space or Kernel of a Linear Transformation.: Let 𝑻: 𝑽 → 𝑾 be a linear transformation where V and
W are two vector spaces. Let 𝜃 and 𝜃′ be the null vectors of V and W respectively. The set of all elements 𝛼
of V for which
𝑻(𝜶) = 𝜽'
is called the Kernel of T. It is denoted by Ker(T). So Ker(T) = {α : T(α) = θ′}.
Nullity of a linear transformation:
Let 𝑻: 𝑽 → 𝑾 be a linear transformation. We have seen its kernel 𝒌𝒆𝒓 (𝑻) is a subspace of V. The dimension
of 𝒌𝒆𝒓 (𝑻) is called the Nullity of T.
Rank of a Linear Transformation:
Let T: V → W be a linear transformation. We have seen the image set T(V) is a subspace of the vector space W. The dimension of this subspace T(V) or Im(T) is called the Rank of T.
Sylvester's Law:
If T: V → W is a linear transformation, where V and W are two vector spaces, then
𝑵𝒖𝒍𝒍𝒊𝒕𝒚 𝒐𝒇 (𝑻) + 𝑹𝒂𝒏𝒌 𝒐𝒇 (𝑻) = 𝒅𝒊𝒎 (𝑽).
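Example: For T: ℝ³ → ℝ² given by T(x, y, z) = (x, y), Ker(T) = {(0, 0, z)} has dimension 1 and Im(T) = ℝ² has dimension 2, so Nullity of (T) + Rank of (T) = 1 + 2 = 3 = dim(ℝ³).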
Matrix Representation of Linear Transformation:
Theorem: If T: V → W be a linear transformation represented by the matrix A (V and W being finite dimensional vector spaces), then Rank of T = Rank of the matrix A.
INVERSE OF LINEAR TRANSFORMATION:
Let 𝑻: 𝑽 → 𝑾 be a linear transformation. T-1 exists if T is both One-one and Onto.
(i) Ker(T) = {θ} ⟺ T is one-one.
(ii) If dim(V) = dim(W) and T is one-one ⟹ T is onto.

Matrix of Inverse map: Let T: V → W be a linear transformation represented by the matrix A (V being a finite dimensional vector space). If the inverse of the matrix A exists, the transformation T⁻¹: T(V) → V exists and is represented by the inverse matrix A⁻¹.

Eigenvalues & Eigenvectors:


Characteristic Equation: Let A be an n × n matrix over R/C. Then det(A − λIₙ) is said to be the Characteristic Polynomial of A. If the characteristic polynomial of A is denoted by ψ_A(λ), then ψ_A(λ) = 0 is said to be the characteristic equation of A.
Eigenvalue of a matrix: A root of the characteristic equation of a square matrix A is said to be an eigen value
(or a characteristic value) of A.
Eigenvector of a matrix: Let A be an n × n matrix over R/C. A non-null vector X is said to be an eigenvector or a characteristic vector of A if there exists a scalar λ belonging to R/C such that
AX = λX
holds.
Theorem 1. The product of the eigen values of a square matrix A is det(A).
Theorem 2. The sum of the eigen values of a square matrix A is Trace(A).
Theorem 3. If A be a singular matrix, 0 is an eigen value of A.
Theorem 4. The eigen values of a diagonal matrix are its diagonal elements.
Note:
1. To an eigenvector of A there corresponds a unique eigenvalue of A.
2. To an eigenvalue of A there corresponds at least one eigenvector.
3. r eigen vectors of a matrix A corresponding to r distinct eigenvalues are linearly independent.
Cayley - Hamilton theorem: Every square matrix satisfies its own characteristic equation
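Example: For the matrix A with rows (1, 2) and (0, 3), the characteristic equation is λ² − 4λ + 3 = 0; its roots 1 and 3 are the eigenvalues, with sum 4 = Trace(A) and product 3 = det(A), and a direct check gives A² − 4A + 3I = O.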

Inner Product Space:


Real Inner Product Space: Let V be a vector space over the Real Field R. A rule under which we get a real number, denoted by (α, β), corresponding to a pair of vectors α, β in V, is called an inner product on V if the rule obeys the following axioms:
1) (𝛼, 𝛽) = (𝛽, 𝛼) for all 𝛼, 𝛽 in V. [Symmetry]
2) (𝑐 ∙ 𝛼, 𝛽) = 𝑐 ∙ (𝛼, 𝛽)for all 𝛼, 𝛽 in V and for all c in R (real field) [Homogeneity]
3) (𝛼, 𝛽 + 𝛾) = (𝛼, 𝛽) + (𝛼, 𝛾) for all 𝛼, 𝛽, 𝛾 in V [Linearity]
4) (𝛼, 𝛼) > 0 𝑖𝑓 𝛼 ≠ 𝜃 where 𝜃 is null vector of V [Positivity]

The vector space V together with an inner product defined on it is called Real inner product space or
Euclidean Space.
Complex Inner Product Space: Let V be a vector space over the Complex Field C. A rule under which we get a complex number, denoted by (α, β), corresponding to a pair of vectors α, β in V, is called an inner product on V if the rule obeys the following axioms:
$(\alpha, \beta) = \overline{(\beta, \alpha)}$ for all α, β in V. [Conjugate symmetry]
(𝑐 ∙ 𝛼, 𝛽) = 𝑐 ∙ (𝛼, 𝛽)for all 𝛼, 𝛽 in V and for all c in C. [Homogeneity]
(𝛼, 𝛽 + 𝛾) = (𝛼, 𝛽) + (𝛼, 𝛾) for all 𝛼, 𝛽, 𝛾 in V [Linearity]
(𝛼, 𝛼) > 0 𝑖𝑓 𝛼 ≠ 𝜃 where 𝜃 is null vector of V [Positivity]
The vector space V together with an inner product defined on it is called Complex inner product space or
Unitary Space.
Norm of a Vector: Let V be an inner product space (Real or Complex). The norm of a vector α in V, denoted by ‖α‖, is defined as ‖α‖ = √(α, α), where (α, β) is the inner product of the vectors α and β. The square root is the positive square root. Since (α, α) ≥ 0, ‖α‖ is a non-negative real number.
Example: In ℝ² with the standard inner product, ‖(3, 4)‖ = √(9 + 16) = 5.
Properties: Let α be any vector in an inner product space V (Real or Complex), whose null vector is θ; then
(1) (θ, α) = (α, θ) = 0
(2) ‖c·α‖ = |c| ‖α‖, where c is any scalar
(3) ‖α‖ > 0 if α ≠ θ.
(4) ‖θ‖ = 0.
Theorem (Schwarz's Inequality): For any two vectors α, β in a Euclidean Space,
|(α, β)| ≤ ‖α‖ ‖β‖
Theorem (Triangle Inequality): For any two vectors α, β in a Euclidean Space,
‖α + β‖ ≤ ‖α‖ + ‖β‖
Theorem (Parallelogram Law): For any two vectors α, β in a Euclidean Space,
‖α + β‖² + ‖α − β‖² = 2(‖α‖² + ‖β‖²)
Orthogonal Set of Vectors: A set of vectors {𝛼1, 𝛼2, … , 𝛼𝑛} in an inner product space is said to be
orthogonal set if any two distinct vectors in the set are orthogonal; that is, (𝛼𝑖, 𝛼𝑗) = 0 whenever 𝑖 ≠ 𝑗
Orthonormal Set of Vectors: A set of vectors {α₁, α₂, …, α_n} in an inner product space is said to be an orthonormal set if any two distinct vectors in the set are orthogonal and the norm of every vector is one; that is, (αᵢ, αⱼ) = 0 whenever i ≠ j, and ‖αᵢ‖ = 1 for i = 1, 2, 3, …, n.
The vector whose norm is 1 is called unit vector.
Note: An orthogonal set of vectors may contain the null vector, but an orthonormal set of vectors cannot contain the null vector because the norm of the null vector is not 1.
Scalar Component & Projection of a Vector: Let α and β be two vectors in a Euclidean space V, with β ≠ θ. The scalar t = (α, β)/(β, β) is called the scalar component (or component) of α along β, and the vector t·β is called the projection of α upon β.
Angle between two vectors: If 𝛼 & 𝛽 be two non-zero vectors in a Euclidean space, the angle 𝜃 between
𝛼 & 𝛽 is defined as
$\theta = \cos^{-1}\frac{(\alpha, \beta)}{\|\alpha\|\,\|\beta\|}$
Gram-Schmidt Orthogonalization Process
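A standard sketch of the process: given linearly independent vectors α₁, α₂, …, α_n, set β₁ = α₁ and, for k = 2, …, n,
$\beta_k = \alpha_k - \sum_{j=1}^{k-1} \frac{(\alpha_k, \beta_j)}{(\beta_j, \beta_j)}\,\beta_j.$
Then {β₁, …, β_n} is an orthogonal set, and dividing each β_k by ‖β_k‖ yields an orthonormal set.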
ALL THE BEST!!
