UNIT-I
THE RANDOM VARIABLE
PROBABILITY
Probability theory is used in all those situations where there is
randomness about the occurrence of an event.
(Or)
It is a measure of the likelihood that an event will occur.
Randomness means the existence of a certain amount of uncertainty about an
event.
This section deals with the chances of occurrence of particular phenomena,
e.g. electron emission, telephone calls, radar detection, quality control,
system failure, games of chance, birth and death rates, random walks,
probability of detection, probability of false alarm, BER calculation,
optimal coding (Huffman) and many more.
Experiment:-
A random experiment is an action or process that leads to one of several
possible outcomes
Sample Space:
The sample space S is the collection of all possible outcomes of a random
experiment. The elements of S are called sample points.
A sample space may be finite, countably infinite or uncountable.
A finite or countably infinite sample space is called a discrete
sample space.
An uncountable sample space is called a continuous sample space
Types of Sample Space:
Finite/Discrete Sample Space:
Consider the experiment of tossing a coin twice. The sample space can be
S = {HH, HT, TH, TT}. The above sample space has a finite number of
sample points; it is called a finite sample space.
Countably Infinite Sample Space:
Consider that a light bulb is manufactured. It is then tested for its life
length by inserting it into a socket and the time elapsed (in hours) until it
burns out is recorded.
Suppose the measuring instrument is capable of recording time to two decimal
places, for example 8.32 hours.
Now the sample space becomes countably infinite, i.e.
S = {0.0, 0.01, 0.02, ...}
The above sample space is called a countably infinite sample space.
Uncountable / Infinite Sample Space/Continuous Sample Space
If the sample space consists of an uncountably infinite number of
elements, then it is called an uncountable/infinite (continuous) sample space.
Event
An event is simply a set of possible outcomes. To be more specific, an
event is a subset A of the sample space S.
Example 1: In tossing a fair coin, getting a head, A = {H}, is an event.
Example 2: In throwing a fair die, getting an even number, A = {2, 4, 6}, is an event.
Types of Events:
Exhaustive Events:
A set of events is said to be exhaustive, if it includes all the possible
events.
Ex. In tossing a coin, the outcome can be either Head or Tail and there is
no other possible outcome. So, the set of events { H , T } is exhaustive.
Mutually Exclusive Events:
Two events are said to be mutually exclusive if they cannot occur at the same time.
Ex. In tossing a coin, both head and tail cannot happen at the same time.
Independent Events:
Two events are said to be independent, if happening or failure of one
does not affect the happening or failure of the other. Otherwise, the
events are said to be dependent.
If two events A and B are independent, then the joint probability is
P(A \cap B) = P(A)\,P(B)
Axioms of Probability
For any event A, we assign a number P(A), called the probability of the event
A. This number satisfies the following conditions, which act as the axioms of
probability:
(i) P(A) \ge 0 (probability is a non-negative number)
(ii) P(S) = 1 (probability of the whole sample space is unity)
(iii) If A and B are mutually exclusive events (A \cap B = \emptyset), then P(A \cup B) = P(A) + P(B)
Note that (iii) states that if A and B are mutually exclusive (M.E.) events, the
probability of their union is the sum of their probabilities.
Relative Frequency
Consider a random experiment with sample space S. We assign a non-negative
number, called the probability, to each event in the sample space.
Let A be a particular event in S; then "the probability of event A" is
denoted by P(A).
Suppose the random experiment is repeated n times. If the event A
occurs n_A times, then the probability of event A is defined as the relative
frequency.
• Relative Frequency Definition:
The probability of an event A is defined as
P(A) = \lim_{n \to \infty} \frac{n_A}{n}
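As an illustration of the relative-frequency idea, the short Python sketch below (an illustrative example, not part of the original notes) estimates P(A) for the event A = "an even face shows" when a fair die is rolled n times; the ratio n_A/n settles near 1/2 as n grows.

```python
import random

def relative_frequency(n_trials: int) -> float:
    """Estimate P(A) for A = 'even face' by repeating the die-roll experiment n_trials times."""
    n_A = sum(1 for _ in range(n_trials) if random.randint(1, 6) % 2 == 0)
    return n_A / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))   # tends to 0.5 as n increases
```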
Joint probability
Joint probability is defined as the probability of the joint (or simultaneous)
occurrence of two or more events.
The probability of both A and B taking place is denoted by P(AB) or P(A \cap B):
P(AB) = P(A \cap B) = P(A) + P(B) - P(A \cup B)
F_X(x) = P\{X \le x\}
F_X(\infty) = P\{X \le \infty\} = 1
since the event \{X \le \infty\} includes every possible value of X, i.e. it is the certain event.
2. The area under the density function is unity:
\int_{-\infty}^{\infty} f_X(x)\,dx = 1
Proof:
\int_{-\infty}^{\infty} f_X(x)\,dx = \int_{-\infty}^{\infty} \frac{d}{dx}F_X(x)\,dx = \left[F_X(x)\right]_{-\infty}^{\infty} = F_X(\infty) - F_X(-\infty) = 1 - 0 = 1
3. The probability distribution function can be obtained from knowledge of the
density function; that is, the distribution function is the area under the
density function up to x.
F_X(x) = \int_{-\infty}^{x} f_X(x)\,dx
Proof:
f_X(x) = \frac{d}{dx} F_X(x)
Integrating both sides from -\infty to x:
\int_{-\infty}^{x} f_X(x)\,dx = \int_{-\infty}^{x} \frac{d}{dx}F_X(x)\,dx = \left[F_X(x)\right]_{-\infty}^{x} = F_X(x) - F_X(-\infty) = F_X(x)
Therefore
F_X(x) = \int_{-\infty}^{x} f_X(x)\,dx
4. The Probability of an event {𝑥1 < 𝑋 ≤ 𝑥2 } can be obtained from the
knowledge of Probability distribution function
P\{x_1 < X \le x_2\} = \int_{x_1}^{x_2} f_X(x)\,dx
Proof:
From the probability distribution function,
P\{x_1 < X \le x_2\} = F_X(x_2) - F_X(x_1)
= \int_{-\infty}^{x_2} f_X(x)\,dx - \int_{-\infty}^{x_1} f_X(x)\,dx
= \int_{-\infty}^{x_1} f_X(x)\,dx + \int_{x_1}^{x_2} f_X(x)\,dx - \int_{-\infty}^{x_1} f_X(x)\,dx
= \int_{x_1}^{x_2} f_X(x)\,dx
Hence
P\{x_1 < X \le x_2\} = \int_{x_1}^{x_2} f_X(x)\,dx
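A quick numerical check of property 4. Assuming, purely for illustration, the exponential density f_X(x) = e^{-x} for x ≥ 0 (with distribution function F_X(x) = 1 - e^{-x}), the area under the density over (x1, x2] should equal F_X(x2) - F_X(x1):

```python
import math

def f(x):                      # assumed example density: f_X(x) = e^(-x), x >= 0
    return math.exp(-x) if x >= 0 else 0.0

def F(x):                      # its distribution function: F_X(x) = 1 - e^(-x), x >= 0
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def integrate(g, a, b, n=100_000):
    """Simple trapezoidal estimate of the area under g between a and b."""
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + i * h) for i in range(1, n)) + 0.5 * g(b))

x1, x2 = 0.5, 2.0
print(integrate(f, x1, x2))    # ~0.4712
print(F(x2) - F(x1))           # ~0.4712, matching the integral of the density
```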
Binomial Random Variable:
The binomial density function is
f_X(x) = \sum_{k=0}^{N} \binom{N}{k} p^{k} (1-p)^{N-k}\,\delta(x-k)
Applications
It is mostly applied to counting-type problems:
The number of telephone calls made during a period of time
The number of defective elements in a given sample
The number of items waiting in a queue
Uniform Random Variable:
At the end points of the interval [a, b],
F_X(a) = 0, \qquad F_X(b) = \frac{b-a}{b-a} = 1
so the distribution function is
F_X(x) = \begin{cases} 0, & x < a \\ \dfrac{x-a}{b-a}, & a \le x < b \\ 1, & x \ge b \end{cases}
Applications
The errors introduced in the round-off process are uniformly distributed.
It is used in digital communication to model errors arising during the
sampling (quantization) process.
Exponential Random Variable:
The exponential density function is
f_X(x) = \begin{cases} \dfrac{1}{b}\, e^{-(x-a)/b}, & x \ge a \\ 0, & x < a \end{cases}
where a and b are real constants, -\infty < a < \infty and b > 0.
Probability distribution function
F_X(x) = \int_{-\infty}^{x} f_X(x)\,dx
F_X(x) = \int_{-\infty}^{a} 0\,dx + \int_{a}^{x} \frac{1}{b}\, e^{-(x-a)/b}\,dx
F_X(x) = 1 - e^{-(x-a)/b}, \quad x \ge a
so that
F_X(x) = \begin{cases} 1 - e^{-(x-a)/b}, & x \ge a \\ 0, & x < a \end{cases}
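A small simulation sketch (illustrative only, with assumed parameters a = 0 and b = 2) checking that the fraction of exponential samples not exceeding x agrees with F_X(x) = 1 - e^{-(x-a)/b}:

```python
import math
import random

a, b = 0.0, 2.0                               # assumed parameters for illustration
samples = [a + random.expovariate(1.0 / b) for _ in range(200_000)]

for x in (1.0, 2.0, 5.0):
    empirical = sum(s <= x for s in samples) / len(samples)
    theoretical = 1.0 - math.exp(-(x - a) / b)
    print(x, round(empirical, 3), round(theoretical, 3))   # the two columns agree
```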
Applications
The distribution of fluctuations in signal strength received by radar
receivers from certain types of targets.
The distribution of raindrop sizes when a large number of rainstorm
measurements are made.
Rayleigh Random Variable:
F_X(x) = \begin{cases} 1 - e^{-(x-a)^2/b}, & x \ge a \\ 0, & x < a \end{cases}
Applications
It describes the envelope of white noise, when the noise is passed
through a band pass filter.
Some types of signal fluctuations received by receivers are modelled as
Rayleigh distribution
EXPECTATION (MEAN VALUE):
The expected value of a random variable X is defined as
E[X] = \bar{X} = \int_{-\infty}^{\infty} x\, f_X(x)\,dx
If X is a discrete RV,
E[X] = \bar{X} = \sum_{i=1}^{N} x_i\, P(x_i)
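For the discrete case the expected value is simply a probability-weighted sum; the sketch below (using an assumed example distribution, not taken from the notes) computes it directly.

```python
# Assumed example distribution for illustration only.
x_values = [1, 2, 3, 4, 5]
probabilities = [0.1, 0.2, 0.4, 0.2, 0.1]

# E[X] = sum over i of x_i * P(x_i)
mean = sum(x * p for x, p in zip(x_values, probabilities))
print(mean)   # 3.0
```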
Properties of Expectations
1. The expected value of a constant is the constant itself: E[k] = k.
Proof
E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx
E[k] = \int_{-\infty}^{\infty} k\, f_X(x)\,dx = k \int_{-\infty}^{\infty} f_X(x)\,dx = k \cdot 1 = k
2. Let E[X] be the expected value of a RV X; then
E[aX] = a\,E[X]
Proof
E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx
E[aX] = \int_{-\infty}^{\infty} a x\, f_X(x)\,dx = a \int_{-\infty}^{\infty} x\, f_X(x)\,dx
E[aX] = a\,E[X]
3. Let E[X] be the expected value of a RV X; then
E[aX + b] = a\,E[X] + b
Proof
E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx
E[aX + b] = \int_{-\infty}^{\infty} (a x + b)\, f_X(x)\,dx = a \int_{-\infty}^{\infty} x\, f_X(x)\,dx + b \int_{-\infty}^{\infty} f_X(x)\,dx
E[aX + b] = a\,E[X] + b
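These properties are easy to verify by sampling; the sketch below (with an assumed exponential sample used purely for illustration) compares a direct estimate of E[aX + b] against a·E[X] + b.

```python
import random

random.seed(1)
x = [random.expovariate(1.0) for _ in range(200_000)]   # assumed sample of X
a, b = 2.0, 5.0

E_X = sum(x) / len(x)
E_aXb = sum(a * xi + b for xi in x) / len(x)

print(E_aXb)            # estimate of E[aX + b]
print(a * E_X + b)      # a*E[X] + b, matching up to sampling error
```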
Moments:
A moment of a RV describes the deviation of the RV from a reference value.
Moments about the origin
Moments about the Mean
Moments about the origin
The expected value of the function g(X) = X^n is called the nth moment about the origin:
m_n = E[X^n] = \int_{-\infty}^{\infty} x^n\, f_X(x)\,dx
First moment:
m_1 = E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx
The first moment about the origin is nothing but the mean value (or expected value)
of the random variable.
Second Moment
m_2 = E[X^2] = \int_{-\infty}^{\infty} x^2\, f_X(x)\,dx
Moments about the Mean (Central Moments)
The expected value of g(X) = (X - \bar{X})^n is called the nth central moment:
\mu_n = E[(X - \bar{X})^n]
First central moment:
\mu_1 = E[X - \bar{X}] = \bar{X} - \bar{X} = 0
Variance:
The second central moment of a random variable X is called the variance.
It is defined as the expected value of the function g(X) = (X - \bar{X})^2,
where \bar{X} is the mean value:
\mu_2 = \sigma_X^2 = \mathrm{var}(X) = E[(X - \bar{X})^2]
The variance is used to calculate the average power of a random signal in
communication related applications.
Relation between variance and moments about the origin
\sigma_X^2 = E[(X - \bar{X})^2] = \int_{-\infty}^{\infty} (x - \bar{X})^2 f_X(x)\,dx
= \int_{-\infty}^{\infty} (x^2 + \bar{X}^2 - 2x\bar{X})\, f_X(x)\,dx
= \int_{-\infty}^{\infty} x^2 f_X(x)\,dx + \bar{X}^2 \int_{-\infty}^{\infty} f_X(x)\,dx - 2\bar{X}\int_{-\infty}^{\infty} x\, f_X(x)\,dx
= m_2 + \bar{X}^2 - 2\bar{X}\cdot\bar{X}
= m_2 + m_1^2 - 2m_1^2
\sigma_X^2 = m_2 - m_1^2
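A numerical sanity check of σ_X² = m₂ − m₁², using an assumed uniform sample for illustration only:

```python
import random

random.seed(2)
x = [random.uniform(0.0, 4.0) for _ in range(200_000)]   # assumed sample

m1 = sum(x) / len(x)                                      # first moment (mean)
m2 = sum(xi * xi for xi in x) / len(x)                    # second moment about the origin
var_direct = sum((xi - m1) ** 2 for xi in x) / len(x)     # E[(X - mean)^2]

print(var_direct)        # ~ (4 - 0)^2 / 12 = 1.333
print(m2 - m1 ** 2)      # same value via m2 - m1^2
```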
Properties of Variance:
1. The variance of a constant is zero:
\mathrm{var}[k] = 0
Proof:
It is known that \mathrm{var}[X] = E[(X - \bar{X})^2]
\mathrm{var}[k] = E[(k - \bar{k})^2]
Since k is a constant, \bar{k} = k, so
\mathrm{var}[k] = E[(k - k)^2] = 0
Skew (Third Central Moment):
\mu_3 = E[(X - \bar{X})^3] = \int_{-\infty}^{\infty} (x - \bar{X})^3 f_X(x)\,dx
It is the degree of distortion from the symmetrical bell curve or the normal
distribution. It measures the lack of symmetry in data distribution.
It differentiates extreme values in one versus the other tail. A symmetrical
distribution will have a skewness of 0.
Positive skewness means the tail on the right side of the distribution is
longer or fatter; the mean and median will be greater than the mode.
Negative skewness means the tail on the left side of the distribution is longer
or fatter than the tail on the right side; the mean and median will be less than
the mode.
Note:
If the skewness is between -0.5 and 0.5, the data are fairly symmetrical.
If the skewness is between -1 and -0.5(negatively skewed) or between 0.5
and 1(positively skewed), the data are moderately skewed.
If the skewness is less than -1(negatively skewed) or greater than
1(positively skewed), the data are highly skewed.
Coefficient of skewness:
It is defined as the ratio of 3rd central moment to cube of standard deviation.
Coefficient of skewness = \frac{\mu_3}{\sigma_X^3}
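The coefficient of skewness μ₃/σ³ can be estimated directly from data. The sketch below uses assumed exponential data (a right-skewed distribution whose theoretical coefficient of skewness is 2) purely for illustration:

```python
import random

random.seed(3)
x = [random.expovariate(1.0) for _ in range(300_000)]     # assumed right-skewed data

mean = sum(x) / len(x)
mu2 = sum((xi - mean) ** 2 for xi in x) / len(x)          # variance (2nd central moment)
mu3 = sum((xi - mean) ** 3 for xi in x) / len(x)          # 3rd central moment
skew = mu3 / mu2 ** 1.5                                   # coefficient of skewness

print(round(skew, 2))    # close to 2 for exponential data
```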
CHARACTERISTIC FUNCTION
The characteristic function of a random variable X is defined as
\phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx
The density function can be recovered from the characteristic function by the inverse transform
f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-j\omega x}\, \phi_X(\omega)\,d\omega
\phi_X(\omega) and f_X(x) are Fourier transform pairs with the sign of the variable reversed.
Since |e^{j\omega x}| = 1,
|\phi_X(\omega)| \le \int_{-\infty}^{\infty} |f_X(x)|\,dx
|\phi_X(\omega)| \le 1
5. The nth moment of a random variable can be obtained from the characteristic function as
m_n = (-j)^n \left.\frac{d^n \phi_X(\omega)}{d\omega^n}\right|_{\omega=0}
Proof:
Consider \phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx
Differentiating n times with respect to \omega and setting \omega = 0:
\left.\frac{d^n \phi_X(\omega)}{d\omega^n}\right|_{\omega=0} = (j)^n \int_{-\infty}^{\infty} x^n f_X(x)\,dx = (j)^n m_n
Hence
m_n = (-j)^n \left.\frac{d^n \phi_X(\omega)}{d\omega^n}\right|_{\omega=0}
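Property 5 can be verified numerically: estimate φ_X(ω) = E[e^{jωX}] from samples and differentiate it numerically at ω = 0. The sketch below assumes, for illustration only, a unit-mean exponential X, for which m₁ = 1 and m₂ = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500_000)       # assumed X with m1 = 1, m2 = 2

def phi(w):
    """Empirical characteristic function phi_X(w) = E[exp(j*w*X)]."""
    return np.mean(np.exp(1j * w * x))

h = 1e-3
d1 = (phi(h) - phi(-h)) / (2 * h)                  # d(phi)/dw at w = 0
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2      # d^2(phi)/dw^2 at w = 0

print((-1j) ** 1 * d1)   # ~ 1  (first moment)
print((-1j) ** 2 * d2)   # ~ 2  (second moment)
```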
MOMENT GENERATING FUNCTION
The moment generating function of X is defined as
M_X(\nu) = E[e^{\nu X}] = \int_{-\infty}^{\infty} e^{\nu x} f_X(x)\,dx
so that M_X(0) = 1.
The nth moment can be obtained from the moment generating function:
\left.\frac{d^n M_X(\nu)}{d\nu^n}\right|_{\nu=0} = \int_{-\infty}^{\infty} x^n f_X(x)\,dx = m_n
m_n = \left.\frac{d^n M_X(\nu)}{d\nu^n}\right|_{\nu=0}
TRANSFORMATIONS OF A RANDOM VARIABLE
Monotonically Increasing Transformation:
Let Y = T(X) be a monotonically increasing transformation. For a value x_0 of X,
let y_0 = T(x_0), i.e. x_0 = T^{-1}(y_0). Then
P\{Y \le y_0\} = P\{X \le x_0\}
F_Y(y_0) = F_X(x_0)
Since F_X(x) = \int_{-\infty}^{x} f_X(x)\,dx,
\int_{-\infty}^{y_0} f_Y(y)\,dy = \int_{-\infty}^{x_0} f_X(x)\,dx
Monotonically Decreasing Transformation:
If Y = T(X) is monotonically decreasing, then
\{Y \le y_0\} = \{X \ge x_0\}
P\{Y \le y_0\} = P\{X \ge x_0\} = 1 - P\{X < x_0\}
F_Y(y_0) = 1 - F_X(x_0)
Since F_X(x) = \int_{-\infty}^{x} f_X(x)\,dx,
\int_{-\infty}^{y_0} f_Y(y)\,dy = 1 - \int_{-\infty}^{x_0} f_X(x)\,dx
Non-monotonic Transformation:
If Y = T(X) is not monotonic, a given value y_0 may correspond to several values x_n of X. Then
P\{Y \le y_0\} = P\{x : T(x) \le y_0\}
F_Y(y_0) = \int_{x:\,T(x)\le y_0} f_X(x)\,dx
Differentiating both sides with respect to y,
\frac{d}{dy} F_Y(y_0) = \frac{d}{dy}\int_{x:\,T(x)\le y_0} f_X(x)\,dx
f_Y(y) = \sum_{n} \frac{f_X(x_n)}{\left|\dfrac{dT(x)}{dx}\right|_{x=x_n}}
f_Y(y) = f_X(x_1)\left|\frac{dx_1}{dy}\right| + f_X(x_2)\left|\frac{dx_2}{dy}\right| + f_X(x_3)\left|\frac{dx_3}{dy}\right| + \cdots
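The transformation rule can be checked by simulation. As an assumed example (not from the notes), take X uniform on (-1, 1) and the non-monotonic transformation Y = X²; the rule gives f_Y(y) = 1/(2√y) on (0, 1), since both roots x = ±√y contribute f_X/|2x| = (1/2)/(2√y).

```python
import math
import random

random.seed(4)
x = [random.uniform(-1.0, 1.0) for _ in range(400_000)]   # assumed X ~ U(-1, 1)
y = [xi ** 2 for xi in x]                                  # non-monotonic T(X) = X^2

# Compare a histogram estimate of f_Y(y) with the rule f_Y(y) = 1 / (2*sqrt(y)).
width = 0.02
for y0 in (0.1, 0.3, 0.6, 0.9):
    in_bin = sum(y0 - width / 2 < yi <= y0 + width / 2 for yi in y)
    empirical = in_bin / (len(y) * width)
    print(y0, round(empirical, 2), round(1 / (2 * math.sqrt(y0)), 2))
```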
MEAN AND VARIANCE OF THE POISSON RANDOM VARIABLE
MEAN:
If X is a discrete RV,
E[X] = \bar{X} = \sum_{i=1}^{N} x_i P(x_i)
For the Poisson random variable, P(X = x) = e^{-\lambda}\frac{\lambda^x}{x!}, \; x = 0, 1, 2, \ldots, so
E[X] = \sum_{x=0}^{\infty} x\, e^{-\lambda}\frac{\lambda^x}{x!}
= \lambda e^{-\lambda}\left(\frac{\lambda^0}{0!} + \frac{\lambda^1}{1!} + \frac{\lambda^2}{2!} + \cdots\right)
= \lambda e^{-\lambda} e^{\lambda}
E[X] = m_1 = \lambda
VARIANCE:
\sigma_X^2 = m_2 - m_1^2
E[X^2] = m_2 = \sum_{i=1}^{N} x_i^2 P(x_i)
m_2 = \sum_{x=0}^{\infty} x^2\, e^{-\lambda}\frac{\lambda^x}{x!}
Using x^2 = x(x-1) + x,
= \sum_{x=0}^{\infty} x(x-1)\, e^{-\lambda}\frac{\lambda^x}{x!} + \sum_{x=0}^{\infty} x\, e^{-\lambda}\frac{\lambda^x}{x!}
= \lambda^2 e^{-\lambda}\left(\frac{\lambda^0}{0!} + \frac{\lambda^1}{1!} + \frac{\lambda^2}{2!} + \cdots\right) + \lambda e^{-\lambda}\left(\frac{\lambda^0}{0!} + \frac{\lambda^1}{1!} + \frac{\lambda^2}{2!} + \cdots\right)
= \lambda^2 e^{-\lambda} e^{\lambda} + \lambda e^{-\lambda} e^{\lambda}
= \lambda^2 + \lambda
Variance
\sigma^2 = m_2 - m_1^2 = \lambda^2 + \lambda - \lambda^2
\sigma^2 = \lambda
For the Poisson random variable
E[X] = \sigma^2 = \lambda
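A quick check that the sample mean and sample variance of Poisson data both approach λ (illustrative sketch with an assumed λ = 3):

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 3.0                                   # assumed rate for illustration
x = rng.poisson(lam, size=500_000)

mean = x.mean()
variance = ((x - mean) ** 2).mean()         # m2 - m1^2

print(round(mean, 3), round(variance, 3))   # both close to lambda = 3
```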
MEAN AND VARIANCE OF THE UNIFORM RANDOM VARIABLE
MEAN:
m_1 = E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx = \int_{a}^{b} \frac{x}{b-a}\,dx = \frac{1}{b-a}\left[\frac{x^2}{2}\right]_a^b
= \frac{1}{2}\,\frac{b^2 - a^2}{b - a} = \frac{1}{2}\,\frac{(b-a)(b+a)}{b-a}
m_1 = E[X] = \frac{b+a}{2}
VARIANCE:
\sigma_X^2 = m_2 - m_1^2
m_2 = E[X^2] = \int_{-\infty}^{\infty} x^2\, f_X(x)\,dx = \int_{a}^{b} \frac{x^2}{b-a}\,dx = \frac{1}{b-a}\left[\frac{x^3}{3}\right]_a^b
= \frac{1}{3}\,\frac{b^3 - a^3}{b-a} = \frac{1}{3}\,\frac{(b-a)(b^2 + a^2 + ab)}{b-a}
m_2 = \frac{b^2 + a^2 + ab}{3}
Variance
\sigma^2 = m_2 - m_1^2 = \frac{b^2 + a^2 + ab}{3} - \left(\frac{b+a}{2}\right)^2
= \frac{b^2 + a^2 + ab}{3} - \frac{b^2 + a^2 + 2ab}{4}
= \frac{b^2 + a^2 - 2ab}{12}
\sigma^2 = \frac{(b-a)^2}{12}
For the uniform random variable
E[X] = \frac{b+a}{2}, \qquad \sigma^2 = \frac{(b-a)^2}{12}
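Numerical confirmation of E[X] = (a + b)/2 and σ² = (b - a)²/12, using an assumed interval [2, 8] purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
a, b = 2.0, 8.0                              # assumed interval for illustration
x = rng.uniform(a, b, size=500_000)

print(round(x.mean(), 3), (a + b) / 2)       # sample mean  vs  (a+b)/2 = 5.0
print(round(x.var(), 3), (b - a) ** 2 / 12)  # sample var   vs  (b-a)^2/12 = 3.0
```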
PROBLEMS
1. Find the constant ‘b’ so that the given density function is a valid density function:
f_X(x) = \begin{cases} e^{3x/4}, & 0 < x < b \\ 0, & \text{elsewhere} \end{cases}
Sol:
We know that \int_{-\infty}^{\infty} f_X(x)\,dx = 1
\int_{0}^{b} e^{3x/4}\,dx = 1
\left[\frac{e^{3x/4}}{3/4}\right]_0^b = 1
e^{3b/4} - 1 = \frac{3}{4}
e^{3b/4} = \frac{7}{4}
\frac{3b}{4} = \ln\left(\frac{7}{4}\right)
b = \frac{4}{3}\ln\left(\frac{7}{4}\right) = 0.7461
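The value of b can be cross-checked numerically: with b = (4/3)ln(7/4), the area under e^{3x/4} over (0, b) should come out to 1 (a simple verification sketch, not part of the original solution):

```python
import math

b = (4.0 / 3.0) * math.log(7.0 / 4.0)        # 0.7461...

# Trapezoidal estimate of the area under f(x) = e^(3x/4) over (0, b).
n = 100_000
h = b / n
area = h * (0.5 * 1.0
            + sum(math.exp(3 * (i * h) / 4) for i in range(1, n))
            + 0.5 * math.exp(3 * b / 4))

print(round(b, 4))      # 0.7461
print(round(area, 4))   # 1.0, so f_X(x) is a valid density with this b
```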
Sol:
If two or more cars arrive during the one-minute interval, a waiting line will occur at the pump.
P(X \ge 2) = 1 - P(X \le 1)
= 1 - F_X(1)
For the Poisson distribution,
F_X(x) = e^{-b}\sum_{k=0}^{x} \frac{b^k}{k!}
P(X \ge 2) = 1 - e^{-b}\sum_{k=0}^{1} \frac{b^k}{k!}
with b = \lambda T = \frac{50}{60}\times 1 = \frac{5}{6}
= 1 - e^{-5/6}\sum_{k=0}^{1} \frac{(5/6)^k}{k!} = 1 - e^{-5/6}\left(1 + \frac{5}{6}\right)
P(X \ge 2) = 0.203
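The same answer follows directly from the Poisson probabilities; the sketch below evaluates 1 - P(X = 0) - P(X = 1) for b = 5/6:

```python
import math

b = 5.0 / 6.0                                    # average arrivals per one-minute interval
p0 = math.exp(-b) * b ** 0 / math.factorial(0)   # P(X = 0)
p1 = math.exp(-b) * b ** 1 / math.factorial(1)   # P(X = 1)

print(round(1 - p0 - p1, 3))                     # 0.203
```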
a) P(X \le 6) = \sum_{n=1}^{6} \frac{n^2}{650} = \frac{1}{650}\left[\frac{n(n+1)(2n+1)}{6}\right]_{n=6} = \frac{91}{650} = 0.14
b) P(X > 4) = 1 - P(X \le 4) = 1 - \sum_{n=1}^{4} \frac{n^2}{650} = 0.9538
c) P(6 < X \le 9) = F_X(9) - F_X(6) = \sum_{n=1}^{9} \frac{n^2}{650} - \sum_{n=1}^{6} \frac{n^2}{650} = 0.2984
Given the density function f_X(x) = \frac{k}{1 + x^2}, -\infty < x < \infty, find i) the value of k, ii) the distribution function F_X(x), iii) P\{X > 0\}.
Sol:
i) Since \int_{-\infty}^{\infty} f_X(x)\,dx = 1,
k\left[\tan^{-1} x\right]_{-\infty}^{\infty} = 1
k\left(\tan^{-1}(\infty) - \tan^{-1}(-\infty)\right) = 1
k\left(\frac{\pi}{2} + \frac{\pi}{2}\right) = 1 \implies k = \frac{1}{\pi}
ii) The distribution function F_X(x):
F_X(x) = k\int_{-\infty}^{x} \frac{1}{1+x^2}\,dx = k\left[\tan^{-1} x\right]_{-\infty}^{x} = \frac{1}{\pi}\left(\tan^{-1}(x) + \frac{\pi}{2}\right)
iii) P\{X > 0\}:
P\{X > 0\} = k\left[\tan^{-1} x\right]_{0}^{\infty} = k\left(\tan^{-1}(\infty) - \tan^{-1}(0)\right) = \frac{1}{\pi}\cdot\frac{\pi}{2} = 0.5
Find the mean and variance of the discrete random variable X with the following probability distribution:
X      1     2     3     4     5
P(X)   0.1   0.2   0.4   0.2   0.1
Sol:
Mean \bar{X} = m_1 = \sum_{i=1}^{N} x_i P(x_i) = (1)(0.1) + (2)(0.2) + (3)(0.4) + (4)(0.2) + (5)(0.1) = 3
Variance \sigma_X^2 = m_2 - m_1^2
m_2 = \sum_{i=1}^{N} x_i^2 P(x_i) = (1)(0.1) + (4)(0.2) + (9)(0.4) + (16)(0.2) + (25)(0.1) = 10.2
\sigma_X^2 = 10.2 - 9 = 1.2
A random variable X has E[X] = -3 and E[X^2] = 11. If Y = 2X - 3, find i) \bar{Y}, ii) \bar{Y^2}, iii) \sigma_Y^2.
Sol:
i) \bar{Y} = E[2X - 3] = 2E[X] - 3 = 2(-3) - 3 = -9
ii) \bar{Y^2} = E[(2X - 3)^2] = E[4X^2 - 12X + 9] = 4E[X^2] - 12E[X] + 9 = (4)(11) - (12)(-3) + 9 = 89
iii) \sigma_Y^2 = m_2 - m_1^2 = 89 - 81 = 8
(OR)
\sigma_Y^2 = \mathrm{var}(Y) = \mathrm{var}(2X - 3)
Since the variance of a constant is zero and \mathrm{var}[aX] = a^2\,\mathrm{var}[X],
\sigma_Y^2 = 4\,\mathrm{var}(X) = 4(11 - 9) = 4 \times 2 = 8
A random variable X has the moment generating function M_X(\nu) = \frac{1}{1-\nu}. Find the first moment m_1.
Sol:
We know that m_n = \left.\frac{d^n M_X(\nu)}{d\nu^n}\right|_{\nu=0}
m_1 = \left.\frac{dM_X(\nu)}{d\nu}\right|_{\nu=0} = \left.\frac{1}{(1-\nu)^2}\right|_{\nu=0} = 1
8. Find the characteristic function of a uniformly distributed random
variable X in the range [0, 1] and hence find m_1.
Sol:
We know that the probability density function of a uniform random
variable is
f_X(x) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases}
In the range [0, 1], a = 0 and b = 1, so
f_X(x) = \begin{cases} 1, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}
The characteristic function is
\phi_X(\omega) = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx = \int_{0}^{1} e^{j\omega x}\,dx = \left[\frac{e^{j\omega x}}{j\omega}\right]_0^1
\phi_X(\omega) = \frac{1}{j\omega}\left[e^{j\omega} - 1\right]
To find m_1:
m_n = (-j)^n \left.\frac{d^n \phi_X(\omega)}{d\omega^n}\right|_{\omega=0}
m_1 = (-j)\left.\frac{d\phi_X(\omega)}{d\omega}\right|_{\omega=0}
Expanding e^{j\omega} = 1 + j\omega + \frac{(j\omega)^2}{2!} + \cdots gives
\phi_X(\omega) = 1 + \frac{j\omega}{2} + \frac{(j\omega)^2}{6} + \cdots
so \left.\frac{d\phi_X(\omega)}{d\omega}\right|_{\omega=0} = \frac{j}{2} and
m_1 = (-j)\cdot\frac{j}{2} = \frac{1}{2}
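The result m_1 = 1/2 can also be checked numerically by estimating φ_X(ω) from uniform [0, 1] samples and differentiating at ω = 0 (an illustrative verification sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, size=500_000)          # X uniform on [0, 1]

def phi(w):
    """Empirical characteristic function phi_X(w) = E[exp(j*w*X)]."""
    return np.mean(np.exp(1j * w * x))

h = 1e-3
m1 = (-1j) * (phi(h) - phi(-h)) / (2 * h)        # m1 = (-j) * d(phi)/dw at w = 0

print(m1.real)                                   # ~ 0.5
```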