5.1 INTRODUCTION
In the previous units, we have studied the assignment and computation of probabilities of events in detail. In those units, we were interested in the occurrence of outcomes. In the present unit, we will be interested in the numbers associated with such outcomes of random experiments. This interest leads us to the study of the concept of a random variable.
In this unit, we will introduce the concept of random variable, discrete and
continuous random variables in Sec. 5.2 and their probability functions in Secs.
5.3 and 5.4.
Objectives
A study of this unit would enable you to:
define a random variable, discrete and continuous random variables;
specify the probability mass function, i.e. probability distribution of
discrete random variable;
specify the probability density function, i.e. probability function of
continuous random variable; and
define the distribution function.
5.3 DISCRETE RANDOM VARIABLE AND
PROBABILITY MASS FUNCTION
Discrete Random Variable
A random variable is said to be discrete if it has either a finite or a countable number of values. A countable number of values means values which can be arranged in a sequence, i.e. values which have a one-to-one correspondence with the set of natural numbers; in other words, on the basis of three or four successive known terms, we can catch the rule and hence write the subsequent terms. For example, suppose X is a random variable taking the values 2, 5, 8, 11, …; then we can write the fifth, sixth, … values, because the values have a one-to-one correspondence with the set of natural numbers, with general term 3n − 1, i.e. on taking n = 1, 2, 3, 4, 5, … we have 2, 5, 8, 11, 14, …. So, X in this example is a discrete random variable. The number of students present each day in a class during an academic session is an example of a discrete random variable, as the number cannot take a fractional value.
Probability Mass Function
Let X be a r.v. which takes the values x1, x2, … and let P[X = xi] = p(xi). This function p(xi), i = 1, 2, …, defined for the values x1, x2, … assumed by X, is called the probability mass function of X, satisfying p(xi) ≥ 0 and Σi p(xi) = 1. The probability distribution of X is then displayed as:

X       x1      x2      x3     …
p(x)    p(x1)   p(x2)   p(x3)  …
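As a computational aside (a minimal Python sketch added here for illustration, not part of the course text), the two conditions on a probability mass function are easy to verify mechanically, using exact fractions so the sum check is not affected by rounding:

from fractions import Fraction as F

def is_pmf(probs):
    # valid p.m.f.: every p(x_i) >= 0 and the probabilities sum to 1
    return all(p >= 0 for p in probs) and sum(probs) == 1

print(is_pmf([F(1, 4), F(1, 2), F(1, 4)]))           # True
print(is_pmf([F(1, 8), F(3, 8), F(1, 4), F(1, 8)]))  # False: sums to 7/8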
Example 1: Check which of the following distributions are probability distributions:
(i)
X      0     1
p(x)   1/2   3/4

(ii)
X      0     1      2
p(x)   3/4   −1/2   3/4

(iii)
X      0     1     2
p(x)   1/4   1/2   1/4

(iv)
X      0     1     2     3
p(x)   1/8   3/8   1/4   1/8
Solution:
(i) Here p(xi) ≥ 0, i = 1, 2; but
Σ(i=1 to 2) p(xi) = p(x1) + p(x2) = p(0) + p(1) = 1/2 + 3/4 = 5/4 ≠ 1.
So, the given distribution is not a probability distribution, as Σ p(xi) is greater than 1.
(ii) It is not a probability distribution, as p(x2) = p(1) = −1/2, i.e. negative.
(iii) Here p(xi) ≥ 0, i = 1, 2, 3,
and Σ(i=1 to 3) p(xi) = p(x1) + p(x2) + p(x3) = p(0) + p(1) + p(2) = 1/4 + 1/2 + 1/4 = 1.
So, the given distribution is a probability distribution.
(iv) Here p(xi) ≥ 0, i = 1, 2, 3, 4; but
Σ(i=1 to 4) p(xi) = p(x1) + p(x2) + p(x3) + p(x4) = p(0) + p(1) + p(2) + p(3) = 1/8 + 3/8 + 1/4 + 1/8 = 7/8 ≠ 1.
So, the given distribution is not a probability distribution.
Example 2: For the following probability distribution of a discrete r.v. X, find
i) the constant c,
ii) P[X ≤ 3], and
iii) P[1 < X < 4].

X      0   1   2   3    4    5
p(x)   0   c   c   2c   3c   c
Solution:
i) Since the total probability is 1,
Σi p(xi) = 1
⟹ 0 + c + c + 2c + 3c + c = 1 ⟹ 8c = 1 ⟹ c = 1/8.
ii) P[X ≤ 3] = P[X = 3] + P[X = 2] + P[X = 1] + P[X = 0]
= 2c + c + c + 0 = 4c = 4(1/8) = 1/2.
iii) P[1 < X < 4] = P[X = 2] + P[X = 3] = c + 2c = 3c = 3(1/8) = 3/8.
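The same steps can be checked mechanically. Here is a small Python sketch (an added illustration; the variable names are ours) that solves for c and evaluates the two probabilities with exact fractions:

from fractions import Fraction as F

coeffs = [0, 1, 1, 2, 3, 1]        # p(x) = 0, c, c, 2c, 3c, c for x = 0..5
c = F(1, sum(coeffs))              # total probability 1 gives 8c = 1
p = [k * c for k in coeffs]

print(c)            # 1/8
print(sum(p[:4]))   # P[X <= 3] = 1/2
print(p[2] + p[3])  # P[1 < X < 4] = 3/8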
Example 3: Find the probability distribution of the number of heads when
three fair coins are tossed simultaneously.
Solution: Let X be the number of heads in the toss of three fair coins.
As the random variable, “the number of heads” in a toss of three coins may be
0 or 1 or 2 or 3 associated with the sample space
{HHH, HHT, HTH, THH, HTT, THT, TTH, TTT},
X can take the values 0, 1, 2, 3, with
P[X = 0] = P[{TTT}] = 1/8,
P[X = 1] = P[{HTT, THT, TTH}] = 3/8,
P[X = 2] = P[{HHT, HTH, THH}] = 3/8, and
P[X = 3] = P[{HHH}] = 1/8.
The probability distribution of X, i.e. of the number of heads when three coins are tossed simultaneously, is

X      0     1     2     3
p(x)   1/8   3/8   3/8   1/8
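A simulation gives an empirical check of this distribution. The following Python sketch (an added illustration; the seed and number of trials are arbitrary) tosses three fair coins repeatedly and estimates p(x):

import random

random.seed(0)
trials = 100_000
counts = [0, 0, 0, 0]
for _ in range(trials):
    heads = sum(random.randint(0, 1) for _ in range(3))   # number of heads in 3 tosses
    counts[heads] += 1

print([n / trials for n in counts])   # close to [0.125, 0.375, 0.375, 0.125]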
Example 4: A random variable X takes the values −2, −1, 0, 1 and 2 such that
P[X < 0] = P[X = 0] = P[X > 0], and P[X = −1] = P[X = −2], P[X = 1] = P[X = 2].
Obtain the probability distribution of X.
Solution: As P[X < 0] = P[X = 0] = P[X > 0],
P[X = −1] + P[X = −2] = P[X = 0] = P[X = 1] + P[X = 2]
⟹ p + p = P[X = 0] = p + p
[Letting P[X = −1] = P[X = −2] = P[X = 1] = P[X = 2] = p]
⟹ P[X = 0] = 2p.
Now, as P[X < 0] + P[X = 0] + P[X > 0] = 1,
P[X = −1] + P[X = −2] + P[X = 0] + P[X = 1] + P[X = 2] = 1
⟹ p + p + 2p + p + p = 1 ⟹ 6p = 1 ⟹ p = 1/6
⟹ P[X = 0] = 2p = 2(1/6) = 2/6, and
P[X = −1] = P[X = −2] = P[X = 1] = P[X = 2] = p = 1/6.
Hence, the probability distribution of X is given by

X      −2    −1    0     1     2
p(x)   1/6   1/6   2/6   1/6   1/6
Now, you can try the following exercises.
E1) Two articles are drawn at random from a lot containing 5 good and 2 bad articles. Obtain the probability distribution of the number of bad articles drawn.
E2) A random variable X has the following probability distribution:

X      0      1      2     3
p(x)   1/10   3/10   1/2   1/10

Obtain the probability distribution of Y = X² + 2X.
E3) Three balls are drawn one by one with replacement from a bag containing 4 red and 3 white balls. Obtain the probability distribution of the number of red balls drawn.
Let us define and explain a continuous random variable and its probability function in the next section.
5.4 CONTINUOUS RANDOM VARIABLE AND PROBABILITY DENSITY FUNCTION
Consider a continuous random variable X whose probability curve is y = f(x), as shown in Fig. 5.1, and let (x, x + δx) be an interval of very small length δx.
[Fig. 5.1: The curve y = f(x) with the shaded region ABCD standing on the interval (x, x + δx)]
Now, if δx is very small, the arc AB of the curve will behave like a straight line, and hence the shaded region will be a rectangle whose area is AD × DC, i.e. f(x)δx
[∵ AD = the value of y at x, i.e. f(x), and DC = the length δx of the interval (x, x + δx)].
Also, this area = probability that X lies in the interval (x, x + δx)
= P[x ≤ X ≤ x + δx].
Hence,
P[x ≤ X ≤ x + δx] = f(x) δx
⟹ f(x) = P[x ≤ X ≤ x + δx] / δx, where δx is very small,
⟹ f(x) = lim(δx → 0) P[x ≤ X ≤ x + δx] / δx.
f(x), so defined, is called the probability density function.
The probability density function has the same properties as the probability mass function: f(x) ≥ 0, and the sum of the probabilities of all possible values that the random variable can take has to be 1. But here, as X is a continuous random variable, the summation is carried out through 'integration' and hence
∫R f(x) dx = 1,
where the integral is taken over the entire range R of values of X.
Remark 2
i) Summation and integration have the same underlying meaning, but the former is used in the discrete case, i.e. for countable values, while the latter is used in the continuous case.
ii) An essential property of a continuous random variable is that there is zero probability that it takes any specified numerical value; the probability that it takes a value in a specified interval, however, is non-zero and is calculable as a definite integral of the probability density function of the random variable. Hence, the probability that a continuous r.v. X lies between two values a and b is given by
P[a < X < b] = ∫(a to b) f(x) dx.
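Numerically, both properties of a p.d.f. can be checked by integration. Here is a short Python sketch using SciPy (added for illustration; f(x) = 2x on (0, 1) is an assumed toy density, not one from the text):

from scipy.integrate import quad

f = lambda x: 2 * x            # toy density on (0, 1)

total, _ = quad(f, 0, 1)       # should be 1
prob, _ = quad(f, 0.2, 0.5)    # P[0.2 < X < 0.5] = (0.5)^2 - (0.2)^2 = 0.21
print(total, prob)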
Example 5: The probability density function of a random variable X is given by
f(x) = Ax³, 0 ≤ x ≤ 1; 0, elsewhere.
Find (i) the value of A, (ii) P[0.2 < X < 0.5], and (iii) P[X > 3/4 | X > 1/2].
Solution:
(i) As f(x) is a p.d.f.,
∫(0 to 1) f(x) dx = 1 ⟹ ∫(0 to 1) Ax³ dx = 1
⟹ A[x⁴/4](0 to 1) = 1 ⟹ A(1/4 − 0) = 1 ⟹ A = 4.
(ii) P[0.2 < X < 0.5] = ∫(0.2 to 0.5) f(x) dx = ∫(0.2 to 0.5) 4x³ dx = [x⁴](0.2 to 0.5)
= (0.5)⁴ − (0.2)⁴ = 0.0625 − 0.0016 = 0.0609.
(iii) Now, P[X > 3/4] = ∫(3/4 to 1) f(x) dx
[the lower limit is 3/4 and the upper limit, from the range of X, is 1]
= ∫(3/4 to 1) 4x³ dx = [x⁴](3/4 to 1) = 1 − 81/256 = 175/256, and
P[X > 1/2] = ∫(1/2 to 1) f(x) dx = [x⁴](1/2 to 1) = 1 − 1/16 = 15/16.
∴ the required probability = P[X > 3/4] / P[X > 1/2]
[∵ P[X > 3/4 ∩ X > 1/2] = P[X > 3/4]]
= (175/256) × (16/15) = 35/48.
Example 6: The p.d.f. of the weight of a "1 litre pure ghee pack" of a company is given by
f(x) = 200(x − 1), 1 ≤ x ≤ 1.1; 0, elsewhere.
Find P[1.01 < X < 1.02].
Solution: As f(x) is a p.d.f.,
P[1.01 < X < 1.02] = ∫(1.01 to 1.02) 200(x − 1) dx = 200[x²/2 − x](1.01 to 1.02)
= 200[(1.0404/2 − 1.02) − (1.0201/2 − 1.01)]
= 200[(0.5202 − 1.02) − (0.51005 − 1.01)]
= 200(−0.4998 + 0.49995) = 200 × 0.00015 = 0.03.
Now, you can try the following exercise.
E4) The p.d.f. of a random variable X is given by
f(x) = A/x³, 1500 ≤ x ≤ 2500; 0, elsewhere.
Find the value of A and hence obtain P[1600 ≤ X ≤ 2000].
5.5 DISTRIBUTION FUNCTION
A function F defined for all values of a random variable X by F(x) = P[X ≤ x] is called the distribution function of X.
Discrete Distribution Function
For a discrete random variable X taking the values x1, x2, … with probabilities p1, p2, …, the distribution function is obtained by cumulating the probabilities:
F(xi) = P[X ≤ xi] = p1 + p2 + … + pi.

X     F(x) = P[X ≤ x]
x1    p1
x2    p1 + p2
x3    p1 + p2 + p3
x4    p1 + p2 + p3 + p4
.     .
.     .
.     .

The value of F(x) corresponding to the last value of the random variable X is always 1, as it is the sum of all the probabilities. F(x) remains 1 beyond this last value of X also, as, being a probability, it can never exceed one.
For example, let X be a random variable having the following probability distribution:

X      0     1     2
p(x)   1/4   1/2   1/4

Notice that p(x) is zero for other values of X. The distribution function of X is then given by

X    F(x) = P[X ≤ x]
0    1/4
1    1/4 + 1/2 = 3/4
2    1/4 + 1/2 + 1/4 = 1
Here, for the last value, i.e. for X = 2, we have F(x) = 1.
Also, if we take a value beyond 2, say 4, then we get
F(4) = P[X ≤ 4]
= P[X = 4] + P[X = 3] + P[X ≤ 2]
= 0 + 0 + 1 = 1.
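Because F(x) is just the running total of the probabilities, it can be produced directly from the p.m.f. A minimal Python sketch (an added illustration, using the distribution above):

from itertools import accumulate
from fractions import Fraction as F

p = [F(1, 4), F(1, 2), F(1, 4)]   # p(0), p(1), p(2)
print(list(accumulate(p)))        # [1/4, 3/4, 1], i.e. F(0), F(1), F(2)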
Example 7: A random variable X has the following probability function:

X      0   1      2     3     4      5       6      7
p(x)   0   1/10   1/5   1/5   3/10   1/100   1/50   17/100

Determine the distribution function of X.
Solution: Here,
F(0) = P[X ≤ 0] = P[X = 0] = 0,
F(1) = P[X ≤ 1] = P[X = 0] + P[X = 1] = 0 + 1/10 = 1/10,
F(2) = P[X ≤ 2] = P[X = 0] + P[X = 1] + P[X = 2] = 0 + 1/10 + 1/5 = 3/10,
and so on. Thus, the distribution function F(x) of X is given in the following table:
X    F(x) = P[X ≤ x]
0    0
1    1/10
2    3/10
3    3/10 + 1/5 = 1/2
4    1/2 + 3/10 = 4/5
5    4/5 + 1/100 = 81/100
6    81/100 + 1/50 = 83/100
7    83/100 + 17/100 = 1
Continuous Distribution Function
For a continuous random variable X having probability density function f(x), the distribution function F(x) = P[X ≤ x] satisfies
f(x) = (d/dx) F(x), i.e. dF(x) = f(x) dx.
For example, let the p.d.f. of X be f(x) = 6x(1 − x), 0 < x < 1. Then, for 0 < x < 1,
F(x) = P[X ≤ x] = ∫(0 to x) f(x) dx = ∫(0 to x) 6x(1 − x) dx
= 6 ∫(0 to x) (x − x²) dx = 6[x²/2 − x³/3](0 to x) = 3x² − 2x³.
The c.d.f. of X is therefore given by
F(x) = 0,            x ≤ 0
     = 3x² − 2x³,    0 < x < 1
     = 1,            x ≥ 1.
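The integration step can also be reproduced symbolically. A short sketch using SymPy (added for illustration; t is a dummy variable of integration):

import sympy as sp

x, t = sp.symbols('x t', positive=True)
F = sp.integrate(6 * t * (1 - t), (t, 0, x))   # integrate the density from 0 to x
print(sp.expand(F))                            # 3*x**2 - 2*x**3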
Now, try the following exercises.
E5) A random variable X has the following probability distribution:

X      0   1    2    3    4    5     6     7     8
p(x)   k   3k   5k   7k   9k   11k   13k   15k   17k

i) Determine the constant k.
ii) Find the distribution function of X.

E6) Obtain the distribution function of the random variable X whose p.d.f. is
f(x) = x/2,          0 ≤ x ≤ 1
     = 1/2,          1 < x ≤ 2
     = (3 − x)/2,    2 < x ≤ 3
     = 0,            elsewhere.
5.6 SUMMARY
The following main points have been covered in this unit of the course:
1) A random variable is a function whose domain is a set of possible outcomes and whose range is a sub-set of the set of reals, and it has the following properties:
i) Each particular value of the random variable can be assigned some probability.
ii) The sum of all the probabilities associated with all the different values of the random variable is unity.
2) A random variable is said to be a discrete random variable if it has either a finite number of values or a countable number of values, i.e. the values can be arranged in a sequence.
3) If a random variable is such that its values cannot be arranged in a sequence, it is called a continuous random variable. So, a random variable is said to be continuous if it can take all the possible real (i.e. integer as well as fractional) values between two certain limits.
4) Let X be a discrete r.v. which takes on the values x1, x2, … and let P[X = xi] = p(xi). The function p(xi) is called the probability mass function of X, satisfying p(xi) ≥ 0 and Σi p(xi) = 1. The set {(x1, p(x1)), (x2, p(x2)), …} is called the probability distribution of the discrete r.v. X.
5) Let X be a continuous random variable and f(x) be a continuous function of x. Suppose (x, x + δx) is an interval of length δx. Then f(x), defined by
f(x) = lim(δx → 0) P[x ≤ X ≤ x + δx] / δx,
is called the probability density function of X.
The probability density function has the same properties as the probability mass function, i.e. f(x) ≥ 0 and ∫R f(x) dx = 1, where the integral is taken over the entire range R of values of X.
6) A function F defined for all values of a random variable X by F(x) = P[X ≤ x] is called the distribution function. It is also known as the cumulative distribution function (c.d.f.) of X. The domain of the distribution function is a set of real numbers and its range is [0, 1]. The distribution function of a discrete random variable X is said to be a discrete distribution function and is given by {(x1, F(x1)), (x2, F(x2)), …}. The distribution function of a continuous random variable X having probability density function f(x) is said to be a continuous distribution function and is given by
F(x) = P[X ≤ x] = ∫(−∞ to x) f(x) dx.
5.7 SOLUTIONS/ANSWERS
E1) Let X be the number of bad articles drawn.
X can take the values 0, 1, 2 with
P[X = 0] = P[No bad article]
= P[Drawing 2 articles from the 5 good articles and none from the 2 bad articles]
= (5C2 × 2C0)/7C2 = (5 × 4 × 1)/(7 × 6) = 10/21,
P[X = 1] = P[One bad article and 1 good article]
= (2C1 × 5C1)/7C2 = (2 × 5 × 2)/(7 × 6) = 10/21, and
P[X = 2] = P[Two bad articles and no good article]
= (2C2 × 5C0)/7C2 = (1 × 1 × 2)/(7 × 6) = 1/21.
E2) As Y = X2 + 2X,
For X = 0, Y = 0 + 0 = 0;
For X = 1, Y = 12 + 2(1) = 3;
For X = 2, Y = 22 + 2(2) = 8; and
For X = 3, Y = 32 + 2(3) = 15.
Thus, the values of Y are 0, 3, 8, 15 corresponding to the values
0, 1, 2, 3 of X and hence
P[Y = 0] = P[X = 0] = 1/10, P[Y = 3] = P[X = 1] = 3/10,
P[Y = 8] = P[X = 2] = 1/2, and P[Y = 15] = P[X = 3] = 1/10.
The probability distribution of Y is

Y      0      3      8     15
p(y)   1/10   3/10   1/2   1/10
E3) Let X be the number of red balls drawn. Then
P[X = 0] = P[No red ball] = P[W1 ∩ W2 ∩ W3] = P(W1)P(W2)P(W3)
= (3/7)(3/7)(3/7) = 27/343,
P[X = 1] = P[One red and two white]
= P[(R1 ∩ W2 ∩ W3) ∪ (W1 ∩ R2 ∩ W3) ∪ (W1 ∩ W2 ∩ R3)]
= P(R1)P(W2)P(W3) + P(W1)P(R2)P(W3) + P(W1)P(W2)P(R3)
= (4/7)(3/7)(3/7) + (3/7)(4/7)(3/7) + (3/7)(3/7)(4/7) = 3(4 × 3 × 3)/343 = 108/343,
P[X = 2] = P[Two red and one white]
= P[(R1 ∩ R2 ∩ W3) ∪ (R1 ∩ W2 ∩ R3) ∪ (W1 ∩ R2 ∩ R3)]
= P(R1)P(R2)P(W3) + P(R1)P(W2)P(R3) + P(W1)P(R2)P(R3)
= 3(4 × 4 × 3)/343 = 144/343, and
P[X = 3] = P[Three red balls] = P[R1 ∩ R2 ∩ R3] = P(R1)P(R2)P(R3)
= (4/7)(4/7)(4/7) = 64/343.
The probability distribution of the number of red balls is

X      0        1         2         3
p(x)   27/343   108/343   144/343   64/343
E4) As f(x) is a p.d.f.,
∫(1500 to 2500) (A/x³) dx = 1 ⟹ A ∫(1500 to 2500) x⁻³ dx = 1 ⟹ A[−1/(2x²)](1500 to 2500) = 1
⟹ (A/2)[1/(1500)² − 1/(2500)²] = 1
⟹ (A/2)[1/(225 × 10⁴) − 1/(625 × 10⁴)] = 1
⟹ (A/2) × (1/10⁴) × (625 − 225)/(225 × 625) = 1
⟹ 16A/(5625 × 20000) = 1
⟹ A = (5625 × 20000)/16 = 5625 × 1250 = 7031250.
Now, P[1600 ≤ X ≤ 2000] = ∫(1600 to 2000) f(x) dx = A ∫(1600 to 2000) (1/x³) dx
= (A/2)[1/(1600)² − 1/(2000)²] = (A/20000)[1/256 − 1/400]
= (A/20000) × (25 − 16)/6400 = (9 × 7031250)/(20000 × 6400) = 2025/4096.
E5) i) As the given distribution is a probability distribution,
the sum of all the probabilities = 1
⟹ k + 3k + 5k + 7k + 9k + 11k + 13k + 15k + 17k = 1
⟹ 81k = 1 ⟹ k = 1/81.
ii) The distribution function of X is given in the following table:

X    F(x) = P[X ≤ x]
0    k = 1/81
1    k + 3k = 4k = 4/81
2    4k + 5k = 9k = 9/81
3    9k + 7k = 16k = 16/81
4    16k + 9k = 25k = 25/81
5    25k + 11k = 36k = 36/81
6    36k + 13k = 49k = 49/81
7    49k + 15k = 64k = 64/81
8    64k + 17k = 81k = 1
E6) For x < 0, F(x) = P[X ≤ x] = 0.
For 0 ≤ x < 1,
F(x) = P[X ≤ x] = ∫(−∞ to x) f(x) dx = ∫(−∞ to 0) f(x) dx + ∫(0 to x) f(x) dx [∵ 0 ≤ x < 1]
= 0 + ∫(0 to x) (x/2) dx [∵ f(x) = x/2 for 0 ≤ x ≤ 1]
= (1/2)[x²/2](0 to x) = x²/4.
For 1 ≤ x < 2,
F(x) = P[X ≤ x] = ∫(−∞ to 0) f(x) dx + ∫(0 to 1) f(x) dx + ∫(1 to x) f(x) dx
= 0 + ∫(0 to 1) (x/2) dx + ∫(1 to x) (1/2) dx
= [x²/4](0 to 1) + (1/2)[x](1 to x)
= 1/4 + (x − 1)/2 = (2x − 1)/4.
For 2 ≤ x < 3,
F(x) = ∫(−∞ to 0) f(x) dx + ∫(0 to 1) f(x) dx + ∫(1 to 2) f(x) dx + ∫(2 to x) f(x) dx
= 0 + ∫(0 to 1) (x/2) dx + ∫(1 to 2) (1/2) dx + ∫(2 to x) ((3 − x)/2) dx
= [x²/4](0 to 1) + (1/2)[x](1 to 2) + (1/2)[3x − x²/2](2 to x)
= 1/4 + 1/2 + (1/2)[(3x − x²/2) − (6 − 2)]
= −x²/4 + 3x/2 − 5/4 = (−x² + 6x − 5)/4.
For 3 ≤ x < ∞,
F(x) = ∫(−∞ to 0) f(x) dx + ∫(0 to 1) f(x) dx + ∫(1 to 2) f(x) dx + ∫(2 to 3) f(x) dx + ∫(3 to x) f(x) dx
= 0 + ∫(0 to 1) (x/2) dx + ∫(1 to 2) (1/2) dx + ∫(2 to 3) ((3 − x)/2) dx + 0
= 1/4 + 1/2 + (1/2)[(9 − 9/2) − (6 − 2)]
= 1/4 + 1/2 + 1/4 = 1.
Hence, the distribution function is given by:
F(x) = 0,                   x < 0
     = x²/4,                0 ≤ x < 1
     = (2x − 1)/4,          1 ≤ x < 2
     = (−x² + 6x − 5)/4,    2 ≤ x < 3
     = 1,                   3 ≤ x.
UNIT 6 BIVARIATE DISCRETE RANDOM VARIABLES
Structure
6.1 Introduction
    Objectives
6.2 Bivariate Discrete Random Variables
6.3 Joint, Marginal and Conditional Probability Mass Functions
6.4 Distribution Function and Marginal Distribution Functions
6.5 Summary
6.6 Solutions/Answers
6.1 INTRODUCTION
In Unit 5, you have studied one-dimensional random variables and their
probability mass functions, density functions and distribution functions. There
may also be situations where we have to study two-dimensional random
variables in connection with a random experiment. For example, we may be
interested in recording the number of boys and girls born in a hospital on a
particular day. Here, ‘the number of boys’ and ‘the number of girls’ are
random variables taking the values 0, 1, 2, … and both these random variables
are discrete also.
In this unit, we concentrate on the two-dimensional discrete random variables
defining them in Sec. 6.2. The joint, marginal and conditional probability mass
functions of two-dimensional random variable are described in Sec. 6.3. The
distribution function and the marginal distribution function are discussed in
Sec. 6.4.
Objectives
A study of this unit would enable you to:
define two-dimensional discrete random variable;
specify the joint probability mass function of two discrete random
variables;
obtain the marginal and conditional distributions for two-dimensional
discrete random variable;
define two-dimensional distribution function;
define the marginal distribution functions; and
solve various practical problems on bivariate discrete random variables.
6.2 BIVARIATE DISCRETE RANDOM VARIABLES
In Unit 5, the concept of single-dimensional random variable has been studied
in detail. Proceeding in analogy with the one-dimensional case, concept of
two-dimensional discrete random variables is discussed in the present unit.
A situation where two-dimensional discrete random variable needs to be
studied has already been given in Sec. 6.1 of this unit. To describe such
situations mathematically, the study of two random variables is introduced.
Definition: Let X and Y be two discrete random variables defined on the sample space S of a random experiment; then the function (X, Y) defined on the same sample space is called a two-dimensional discrete random variable. In other words, (X, Y) is a two-dimensional discrete random variable if the possible values of (X, Y) are finite or countably infinite. Here, each pair of values of X and Y is represented as a point (x, y) in the xy-plane.
As an illustration, let us consider the following example:
Let three balls b1, b2, b3 be placed randomly in three cells. The possible
outcomes of placing the three balls in three cells are shown in Table 6.1.
Table 6.1 : Possible Outcomes of Placing the Three Balls in Three Cells
Arrangement Placement of the Balls in
Number
Cell 1 Cell 2 Cell 3
1 b1 b2 b3
2 b1 b3 b2
3 b2 b1 b3
4 b2 b3 b1
5 b3 b1 b2
6 b3 b2 b1
7 b1,b2 b3 -
8 b1,b2 - b3
9 - b1,b2 b3
10 b1,b3 b2 -
11 b1,b3 - b2
12 - b1,b3 b2
13 b2,b3 b1 -
14 b2,b3 - b1
15 - b2,b3 b1
16 b1 b2,b3 -
17 b1 - b2,b3
18 - b1 b2,b3
19 b2 b3,b1 -
20 b2 - b3,b1
21 - b2 b3,b1
22 b3 b1,b2 -
23 b3 - b1,b2
24 - b3 b1,b2
25 b1,b2,b3 - -
26 - b1,b2,b3 -
27 - - b1,b2,b3
Now, let X denote the number of balls in Cell 1 and Y the number of cells occupied. Notice that X and Y are discrete random variables, where X takes on the values 0, 1, 2, 3 (∵ the number of balls in Cell 1 may be 0, 1, 2 or 3) and Y takes on the values 1, 2, 3 (∵ the number of occupied cells may be 1, 2 or 3).
The possible values of the two-dimensional random variable (X, Y), therefore, are all ordered pairs of the values x and y of X and Y, respectively, i.e. (0, 1), (0, 2), (0, 3), (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3).
Now, to each possible value (xi, yj) of (X, Y), we can associate a number p(xi, yj) representing P[X = xi, Y = yj], as discussed in the following section of this unit.
6.3 JOINT, MARGINAL AND CONDITIONAL PROBABILITY MASS FUNCTIONS
For the illustration of the three balls placed in three cells, these probabilities are obtained by counting the equally likely arrangements of Table 6.1. For instance,
p(1, 2) = P[X = 1, Y = 2] = P[one ball in Cell 1 and 2 cells occupied]
= P[Arrangement numbers 16, 17, 19, 20, 22, 23] = 6/27,
p(1, 3) = P[X = 1, Y = 3] = P[one ball in Cell 1 and 3 cells occupied]
= P[Arrangement numbers 1 to 6] = 6/27,
p(2, 1) = P[X = 2, Y = 1] = P[two balls in Cell 1 and 1 cell occupied]
= P[Impossible event] = 0,
p(2, 2) = P[X = 2, Y = 2] = P[two balls in Cell 1 and 2 cells occupied]
= P[Arrangement numbers 7, 8, 10, 11, 13, 14] = 6/27,
p(2, 3) = P[X = 2, Y = 3] = P[two balls in Cell 1 and 3 cells occupied]
= P[Impossible event] = 0,
p(3, 1) = P[X = 3, Y = 1] = P[three balls in Cell 1 and 1 cell occupied]
= P[Arrangement number 25] = 1/27,
p(3, 2) = P[X = 3, Y = 2] = P[three balls in Cell 1 and 2 cells occupied]
= P[Impossible event] = 0, and
p(3, 3) = P[X = 3, Y = 3] = P[three balls in Cell 1 and 3 cells occupied]
= P[Impossible event] = 0.
The values of (X, Y), together with the probabilities associated as above, constitute what is known as the joint probability distribution of (X, Y), which can also be written in tabular form as shown below:
Y 1 2 3 Total
X
0 2 / 27 6 / 27 0 8 / 27
1 0 6 / 27 6 / 27 12 / 27
2 0 6 / 27 0 6 / 27
3 1 / 27 0 0 1 / 27
Total 3 / 27 18 / 27 6 / 27 1
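Such a table is conveniently handled as a mapping from pairs (x, y) to probabilities. The following Python sketch (an added illustration of the table above) recomputes the row and column totals:

from fractions import Fraction as F

p = {(0, 1): F(2, 27), (0, 2): F(6, 27), (0, 3): F(0),
     (1, 1): F(0),     (1, 2): F(6, 27), (1, 3): F(6, 27),
     (2, 1): F(0),     (2, 2): F(6, 27), (2, 3): F(0),
     (3, 1): F(1, 27), (3, 2): F(0),     (3, 3): F(0)}

px = {x: sum(v for (i, j), v in p.items() if i == x) for x in range(4)}     # marginal of X
py = {y: sum(v for (i, j), v in p.items() if j == y) for y in range(1, 4)}  # marginal of Y
print(sum(p.values()), px, py)   # total 1, row totals, column totals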
We are now in a position to define the joint, marginal and conditional probability mass functions.
The joint probability mass function p(xi, yj) = P[X = xi, Y = yj] satisfies
(i) p(xi, yj) ≥ 0, and
(ii) Σi Σj p(xi, yj) = 1.
The marginal probability mass function of X is
p(xi) = P[X = xi]
= P[(X = xi ∩ Y = y1) or (X = xi ∩ Y = y2) or …]
= P[X = xi ∩ Y = y1] + P[X = xi ∩ Y = y2] + P[X = xi ∩ Y = y3] + …
= Σj P[X = xi ∩ Y = yj] = Σj p(xi, yj).
Similarly, the marginal probability mass function of Y is
p(yj) = P[Y = yj]
= P[X = x1 ∩ Y = yj] + P[X = x2 ∩ Y = yj] + …
= Σi P[X = xi ∩ Y = yj] = Σi p(xi, yj).
The conditional probability mass function of X, given Y = y, is defined as
p(x | y) = P[X = x | Y = y] = P[X = x ∩ Y = y] / P[Y = y], provided P[Y = y] ≠ 0
[∵ P(A | B) = P(A ∩ B)/P(B), P(B) ≠ 0].
Similarly, the conditional probability mass function of Y, given X = x, is defined as
p(y | x) = P[Y = y | X = x] = P[Y = y ∩ X = x] / P[X = x], provided P[X = x] ≠ 0.
For the illustration of the three balls, the conditional distribution of X given Y = 2, for instance, is obtained as:
P[X = 0 | Y = 2] = P[X = 0 ∩ Y = 2] / P[Y = 2] = (6/27)/(18/27) = 1/3,
P[X = 1 | Y = 2] = P[X = 1 ∩ Y = 2] / P[Y = 2] = (6/27)/(18/27) = 1/3,
P[X = 2 | Y = 2] = P[X = 2 ∩ Y = 2] / P[Y = 2] = (6/27)/(18/27) = 1/3, and
P[X = 3 | Y = 2] = P[X = 3 ∩ Y = 2] / P[Y = 2] = 0/(18/27) = 0.
[Note that the values of the numerators and denominators in the above expressions have already been obtained while discussing the joint and marginal probability mass functions in this section of the unit.]
Independence of Random Variables
Two discrete random variables X and Y are said to be independent if and only if
P[X = xi ∩ Y = yj] = P[X = xi] P[Y = yj] for all i, j
[∵ two events A and B are independent if and only if P(A ∩ B) = P(A) P(B)].
Example 1: The following table represents the joint probability distribution of
the discrete random variable (X, Y):
Y 1 2
X
1 0.1 0.2
2 0.1 0.3
3 0.2 0.1
Find :
i) The marginal distributions.
ii) The conditional distribution of X given Y = 1.
iii) P[(X + Y) < 4].
Solution:
i) To find the marginal distributions, we have to find the marginal totals, i.e.
row totals and column totals as shown in the following table:
Y 1 2 px
X
(Totals)
1 0.1 0.2 0.3
2 0.1 0.3 0.4
3 0.2 0.1 0.3
p y 0.4 0.6 1
(Totals)
ii) P[X = 1 | Y = 1] = P[X = 1, Y = 1] / P[Y = 1] = 0.1/0.4 = 1/4,
P[X = 2 | Y = 1] = P[X = 2, Y = 1] / P[Y = 1] = 0.1/0.4 = 1/4, and
P[X = 3 | Y = 1] = P[X = 3, Y = 1] / P[Y = 1] = 0.2/0.4 = 1/2.
iii) The values of (X, Y) which satisfy X + Y < 4 are (1, 1), (1, 2) and (2, 1) only.
∴ P[X + Y < 4] = P[X = 1, Y = 1] + P[X = 1, Y = 2] + P[X = 2, Y = 1]
= 0.1 + 0.2 + 0.1 = 0.4.
Consider now a two-dimensional random variable (X, Y) having the following joint probability distribution, with the marginal totals shown:

Y      0     1     p(x)
X
0      2/9   1/9   3/9
1      1/9   5/9   6/9
p(y)   3/9   6/9   1

Here, P[X = 0] = 3/9, P[X = 1] = 6/9, and P[Y = 0] = 3/9, P[Y = 1] = 6/9.
Now, P[X = 0] P[Y = 0] = (3/9)(3/9) = 1/9,
but P[X = 0, Y = 0] = 2/9.
∴ P[X = 0, Y = 0] ≠ P[X = 0] P[Y = 0],
and hence X and Y are not independent.
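The comparison just made is exactly the programmatic test for independence. A minimal Python sketch (added for illustration, using the table above):

from fractions import Fraction as F

p = {(0, 0): F(2, 9), (0, 1): F(1, 9),
     (1, 0): F(1, 9), (1, 1): F(5, 9)}
px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
py = {y: p[(0, y)] + p[(1, y)] for y in (0, 1)}

# independent iff p(x, y) = p(x) p(y) for every cell
print(all(p[(x, y)] == px[x] * py[y] for x in (0, 1) for y in (0, 1)))   # False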
Here is an exercise for you.
E2) For the following joint probability distribution of (X, Y),

Y     1      2      3
X
1     1/20   1/10   1/10
2     1/20   1/10   1/10
3     1/10   1/10   1/20
4     1/10   1/10   1/20

find i) P[Y = 2 | X = 4], ii) P[Y = 2], and iii) whether the events X = 4 and Y = 2 are independent.
Note that the marginal probabilities needed here are obtained as
P[X = x] = P[X = x, Y = y1] + P[X = x, Y = y2] + … = Σj P[X = x, Y = yj], and
P[Y = y] = P[X = x1, Y = y] + P[X = x2, Y = y] + … = Σi P[X = xi, Y = y].
6.4 DISTRIBUTION FUNCTION AND MARGINAL DISTRIBUTION FUNCTIONS
The joint distribution function of (X, Y) is defined as F(x, y) = P[X ≤ x, Y ≤ y], and the marginal distribution function of X as FX(x) = P[X ≤ x].
Example 3: Considering the probability distribution given in Example 1, find
i) F(2, 2), and ii) FX(3).
Solution:
i) F(2, 2) = P[X ≤ 2, Y ≤ 2]
= P[X = 1, Y = 1] + P[X = 1, Y = 2] + P[X = 2, Y = 1] + P[X = 2, Y = 2]
= 0.1 + 0.2 + 0.1 + 0.3 = 0.7.
ii) FX(3) = P[X ≤ 3] = P[X ≤ 3, Y ≤ 2]
= P[X = 1, Y = 1] + P[X = 2, Y = 1] + P[X = 3, Y = 1]
+ P[X = 1, Y = 2] + P[X = 2, Y = 2] + P[X = 3, Y = 2]
= 0.1 + 0.1 + 0.2 + 0.2 + 0.3 + 0.1 = 1.
For the two-dimensional random variable (X, Y) having the joint distribution given in the table above [with values 0 and 1 for each of X and Y], the distribution function F(x, y) = P[X ≤ x, Y ≤ y] takes the values:
F(0, 0) = P[X ≤ 0, Y ≤ 0] = P[X = 0, Y = 0] = 2/9,
F(0, 1) = P[X ≤ 0, Y ≤ 1] = P[X = 0, Y = 0] + P[X = 0, Y = 1] = 2/9 + 1/9 = 3/9,
F(1, 0) = P[X ≤ 1, Y ≤ 0] = P[X = 1, Y = 0] + P[X = 0, Y = 0] = 1/9 + 2/9 = 3/9, and
F(1, 1) = P[X ≤ 1, Y ≤ 1] = P[X = 1, Y = 1] + P[X = 1, Y = 0] + P[X = 0, Y = 1] + P[X = 0, Y = 0]
= 5/9 + 1/9 + 1/9 + 2/9 = 1.
The above distribution function F(x, y) can be shown in tabular form as follows:

        Y ≤ 0   Y ≤ 1
X ≤ 0   2/9     3/9
X ≤ 1   3/9     1

The marginal distribution function of X is
FX(0) = P[X ≤ 0] = P[X = 0]
= P[X = 0, Y = 0] + P[X = 0, Y = 1] = 2/9 + 1/9 = 3/9, and
FX(1) = P[X ≤ 1] = P[X = 1, Y = 0] + P[X = 0, Y = 0] + P[X = 1, Y = 1] + P[X = 0, Y = 1]
= 1/9 + 2/9 + 5/9 + 1/9 = 1.
Thus, the marginal distribution function of X is given as

X    F(x)
0    3/9
1    1

Similarly, the marginal distribution function of Y can be obtained. [Do it yourself]
Here is an exercise for you. Bivariate Discrete Random
Variables
E3) Obtain the joint and marginal distribution functions for the joint
probability distribution given in E 1).
Now, before ending this unit, let us summarize what we have covered in it.
6.5 SUMMARY
In this unit we have covered the following main points:
1) If X and Y are two discrete random variables defined on the sample space S of a random experiment, then the function (X, Y) defined on the same sample space is called a two-dimensional discrete random variable. In other words, (X, Y) is a two-dimensional discrete random variable if the possible values of (X, Y) are finite or countably infinite.
2) The joint probability mass function p(xi, yj) satisfies
(i) p(xi, yj) ≥ 0, and
(ii) Σi Σj p(xi, yj) = 1.
3) The conditional probability mass functions are
p(x | y) = P[X = x | Y = y] = P[X = x ∩ Y = y] / P[Y = y], and
p(y | x) = P[Y = y | X = x] = P[Y = y ∩ X = x] / P[X = x].
6.6 SOLUTIONS/ANSWERS
E1) Let us compute the marginal totals. The complete table with marginal totals is:

Y      1      2       3        p(x)
X
1      1/12   0       1/18     1/12 + 0 + 1/18 = 5/36
2      1/6    1/9     1/4      1/6 + 1/9 + 1/4 = 19/36
3      0      1/5     2/15     0 + 1/5 + 2/15 = 1/3
p(y)   1/4    14/45   79/180   1

Therefore,
i) The marginal distribution of X is

X    p(x)
1    5/36
2    19/36
3    1/3

ii) P[Y = 1 | X = 2] = P[Y = 1, X = 2] / P[X = 2] = (1/6) × (36/19) = 6/19,
P[Y = 2 | X = 2] = P[Y = 2, X = 2] / P[X = 2] = (1/9) × (36/19) = 4/19, and
P[Y = 3 | X = 2] = P[Y = 3, X = 2] / P[X = 2] = (1/4) × (36/19) = 9/19.
Hence, the conditional distribution of Y given X = 2 is

Y    p(y | x = 2)
1    6/19
2    4/19
3    9/19
iii) P[X + Y < 5]
= P[X = 1, Y = 1] + P[X = 1, Y = 2] + P[X = 1, Y = 3]
+ P[X = 2, Y = 1] + P[X = 2, Y = 2] + P[X = 3, Y = 1]
= 1/12 + 0 + 1/18 + 1/6 + 1/9 + 0 = 15/36.
E2) First compute the marginal totals; then you will be able to find
i) P[X = 4] = 1/4, and hence
P[Y = 2 | X = 4] = P[Y = 2, X = 4] / P[X = 4] = (1/10)/(1/4) = 2/5,
ii) P[Y = 2] = 1/10 + 1/10 + 1/10 + 1/10 = 2/5, and
iii) P[X = 4, Y = 2] = 1/10, P[X = 4] = 1/4, P[Y = 2] = 2/5,
so P[X = 4] P[Y = 2] = (1/4)(2/5) = 1/10 = P[X = 4, Y = 2]
⟹ the events X = 4 and Y = 2 are independent.
E3) To obtain the joint distribution function F(x, y) = P[X ≤ x, Y ≤ y], we have to obtain F(x, y) for each value of X and Y, i.e. cumulate the joint probabilities:

        Y ≤ 1   Y ≤ 2     Y ≤ 3
X ≤ 1   1/12    1/12      5/36
X ≤ 2   1/4     13/36     2/3
X ≤ 3   1/4     101/180   1

The marginal distribution function of X is given as

X    F(x)
1    5/36
2    2/3
3    1
UNIT 7 BIVARIATE CONTINUOUS RANDOM VARIABLES
Structure
7.1 Introduction
Objectives
7.2 Bivariate Continuous Random Variables
7.3 Joint and Marginal Distribution and Density Functions
7.4 Conditional Distribution and Density Functions
7.5 Stochastic Independence of Two Continuous Random Variables
7.6 Problems on Two-Dimensional Continuous Random Variables
7.7 Summary
7.8 Solutions/Answers
7.1 INTRODUCTION
In Unit 6, we have defined the bivariate discrete random variable (X, Y), where
X and Y both are discrete random variables. It may also happen that one of the
random variables is discrete and the other is continuous. However, in most
applications we deal only with the cases where either both random variables are
discrete or both are continuous. The cases where both random variables are
discrete have already been discussed in Unit 6. Here, in this unit, we are going
to discuss the cases where both random variables are continuous.
In Unit 6, you have studied the joint, marginal and conditional probability
functions and distribution functions in context of bivariate discrete random
variables. Similar functions, but in context of bivariate continuous random
variables, are discussed in this unit.
Bivariate continuous random variable is defined in Sec. 7.2. Joint and marginal
density functions are described in Sec. 7.3. Sec. 7.4 deals with the conditional
distribution and density functions. Independence of two continuous random
variables is dealt with in Sec. 7.5. Some practical problems on two-dimensional
continuous random variables are taken up in Sec. 7.6.
Objectives
A study of this unit would enable you to:
define two-dimensional continuous random variable;
specify the joint and marginal probability density functions of two
continuous random variables;
obtain the conditional density and distribution functions for two-
dimensional continuous random variable;
check the independence of two continuous random variables; and
solve various practical problems on bivariate continuous random
variables.
7.2 BIVARIATE CONTINUOUS RANDOM VARIABLES
Definition: If X and Y are continuous random variables defined on the sample space S of a random experiment, then (X, Y), defined on the same sample space S, is called a bivariate continuous random variable if (X, Y) assigns a point in the xy-plane to each outcome in S. Notice that it (unlike a discrete random variable) assumes values in some non-countable set. Some examples of bivariate continuous random variables are:
1. A gun is aimed at a certain point (say origin of the coordinate system).
Because of the random factors, suppose the actual hit point is any point
(X, Y) in a circle of radius unity about the origin.
[Fig. 7.1: (X, Y) assuming all values in the unit circle about the origin]
2. (X, Y) may take all the values in a rectangle, as shown in Fig. 7.2.
[Fig. 7.2: (X, Y) Assuming All Values in the Rectangle {(x, y) : a ≤ x ≤ b, c ≤ y ≤ d}]
3. In a statistical survey, let X denote the daily number of hours a child watches television and Y the number of hours he/she spends on studies. Here, (X, Y) is a two-dimensional continuous random variable.
7.3 JOINT AND MARGINAL DISTRIBUTION AND DENSITY FUNCTIONS
Two-Dimensional Continuous Distribution Function
The distribution function of a two-dimensional continuous random variable (X, Y) is a real-valued function and is defined as
F(x, y) = P[X ≤ x, Y ≤ y] for all real x and y.
In terms of the joint probability density function f(x, y) of (X, Y),
F(x, y) = ∫(−∞ to x) ∫(−∞ to y) f(x, y) dy dx.
Remark 2:
As in the one-dimensional case, f(x, y) does not itself represent the probability of anything. However, for positive δx and δy sufficiently small, f(x, y) δx δy is approximately equal to
P[x ≤ X ≤ x + δx, y ≤ Y ≤ y + δy].
In the one-dimensional case, you have studied that for positive δx sufficiently small, f(x)δx is approximately equal to P[x ≤ X ≤ x + δx]. So, the two-dimensional case is in analogy with the one-dimensional case.
Remark 3:
In analogy with the one-dimensional case [see Sec. 5.4 of Unit 5 of this course], f(x, y) can be written as
f(x, y) = lim(δx → 0, δy → 0) P[x ≤ X ≤ x + δx, y ≤ Y ≤ y + δy] / (δx δy),
and is equal to
∂²F(x, y)/∂x∂y, i.e. the second order partial derivative of F with respect to x and y.
[See Sec. 5.5 of Unit 5, where f(x) = (d/dx) F(x).]
Note: ∂²F(x, y)/∂x∂y means that we first differentiate F(x, y) partially w.r.t. y and then differentiate the resulting function w.r.t. x. When we differentiate a function partially w.r.t. one variable, the other variable is treated as a constant.
For example, let F(x, y) = xy³ + x²y. Then
∂F/∂y = 3xy² + x², and
∂²F/∂x∂y = ∂(3xy² + x²)/∂x = 3y² + 2x.
The marginal probability density function of X is given as
f(x) = ∫(−∞ to ∞) f(x, y) dy,
or it may also be obtained as f(x) = (d/dx) FX(x);
and the marginal probability density function of Y is given as
f(y) = ∫(−∞ to ∞) f(x, y) dx, or f(y) = (d/dy) FY(y).
7.4 CONDITIONAL DISTRIBUTION AND DENSITY FUNCTIONS
The conditional probability density function of Y given X = x is defined as
f(y | x) = f(x, y)/f(x), f(x) > 0.
It satisfies:
i) f(y | x) is clearly ≥ 0, and
ii) ∫ f(y | x) dy = ∫ [f(x, y)/f(x)] dy = (1/f(x)) ∫ f(x, y) dy
[∵ ∫ f(x, y) dy is the marginal probability density function f(x) of X]
= f(x)/f(x) = 1.
Similarly, f(x | y) = f(x, y)/f(y) satisfies
i) f(x | y) ≥ 0, and
ii) ∫ f(x | y) dx = 1.
7.5 STOCHASTIC INDEPENDENCE OF TWO CONTINUOUS RANDOM VARIABLES
Two continuous random variables X and Y are said to be independent if and only if
f(x | y) = f(x), or equivalently f(y | x) = f(y), i.e.
f(x, y) = f(x) f(y) [on cross-multiplying].
Example 1: The joint density function of a two-dimensional random variable (X, Y) is given by
f(x, y) = k(2x + y), 0 ≤ x ≤ 1, 0 ≤ y ≤ 2; 0, elsewhere.
Find the value of k.
Solution: As f(x, y) is a joint density function,
∫(0 to 1) ∫(0 to 2) f(x, y) dy dx = 1 [∵ 0 ≤ x ≤ 1, 0 ≤ y ≤ 2]
⟹ ∫(0 to 1) ∫(0 to 2) k(2x + y) dy dx = 1
⟹ k ∫(0 to 1) [2xy + y²/2](y = 0 to 2) dx = 1
⟹ k ∫(0 to 1) (4x + 2) dx = 1
⟹ k [2x² + 2x](0 to 1) = 1
⟹ k(2 + 2) = 1 ⟹ 4k = 1 ⟹ k = 1/4.
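The normalisation can be confirmed numerically with a double integral. A short Python sketch using SciPy (added for illustration):

from scipy.integrate import dblquad

# integrate 2x + y over 0 <= x <= 1, 0 <= y <= 2 (dblquad's integrand takes y first, then x)
mass, _ = dblquad(lambda y, x: 2 * x + y, 0, 1, lambda x: 0, lambda x: 2)
print(mass, 1 / mass)   # 4.0, so k = 1/4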
Example 2: Let the joint density function of a two-dimensional random variable (X, Y) be
f(x, y) = x + y, 0 < x < 1, 0 < y < 1; 0, otherwise.
Find the marginal density function of X.
Solution: The marginal density function of X is
f(x) = ∫(0 to 1) f(x, y) dy [∵ 0 < y < 1]
= ∫(0 to 1) (x + y) dy = [xy + y²/2](0 to 1)
= x + 1/2, 0 < x < 1.
Example 3: The joint p.d.f. of a two-dimensional random variable (X, Y) is given by
f(x, y) = 8xy, 0 < x < y < 1; 0, elsewhere.
Find i) P[X < 1/2 ∩ Y < 1/4], ii) the marginal and conditional density functions, and iii) check whether X and Y are independent.
Solution:
i) P[X < 1/2 ∩ Y < 1/4] = ∫(x = 0 to 1/2) ∫(y = 0 to 1/4) 8xy dy dx
= ∫(0 to 1/2) 8x [y²/2](0 to 1/4) dx = ∫(0 to 1/2) (x/4) dx
= (1/4)[x²/2](0 to 1/2) = (1/4)(1/8) = 1/32.
ii) The marginal density function of X is
f(x) = ∫(y = x to 1) f(x, y) dy [∵ x < y < 1]
= ∫(x to 1) 8xy dy = 8x[y²/2](x to 1) = 8x(1 − x²)/2 = 4x(1 − x²), 0 < x < 1.
The marginal density function of Y is
f(y) = ∫(x = 0 to y) f(x, y) dx [∵ 0 < x < y]
= ∫(0 to y) 8xy dx = 8y[x²/2](0 to y) = 8y³/2 = 4y³, 0 < y < 1.
The conditional density function of X given Y (0 < Y < 1) is
f(x | y) = f(x, y)/f(y) = 8xy/(4y³) = 2x/y², 0 < x < y,
and the conditional density function of Y given X (0 < X < 1) is
f(y | x) = f(x, y)/f(x) = 8xy/[4x(1 − x²)] = 2y/(1 − x²), x < y < 1.
iii) Here f(x, y) = 8xy,
but f(x) f(y) = 4x(1 − x²) × 4y³ = 16x(1 − x²)y³,
so f(x, y) ≠ f(x) f(y).
Hence, X and Y are not independent random variables.
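This failure of the product rule can also be seen numerically at a single point. A Python sketch (added for illustration; the test point (0.3, 0.6) is arbitrary):

from scipy.integrate import quad

# joint density 8xy on the support 0 < x < y < 1
fx = lambda x: quad(lambda y: 8 * x * y, x, 1)[0]   # marginal of X: integrate over x < y < 1
fy = lambda y: quad(lambda x: 8 * x * y, 0, y)[0]   # marginal of Y: integrate over 0 < x < y

x0, y0 = 0.3, 0.6
print(8 * x0 * y0, fx(x0) * fy(y0))   # 1.44 versus about 0.94: not equal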
Now, you can try some exercises.
E1) Let X and Y be two random variables. For
f(x, y) = kxy, 0 ≤ x ≤ 4, 1 ≤ y ≤ 5; 0, otherwise
to be a joint density function, what must be the value of k?
E2) If the joint p.d.f. of a two-dimensional random variable (X, Y) is given by
f(x, y) = 2, 0 < x < 1, 0 < y < x; 0, otherwise,
then
i) find the marginal density functions of X and Y;
ii) find the conditional density functions; and
iii) check for independence of X and Y.
E3) If (X, Y) is a two-dimensional random variable having joint density function
f(x, y) = (1/8)(6 − x − y), 0 ≤ x < 2, 2 ≤ y < 4; 0, elsewhere,
find i) P[X < 1 ∩ Y < 3], and ii) P[X < 1 | Y < 3].
Now, before ending this unit, let's summarize what we have covered in it.
7.7 SUMMARY
In this unit, we have covered the following main points:
1) If X and Y are continuous random variables defined on the sample space S of a random experiment, then (X, Y), defined on the same sample space S, is called a bivariate continuous random variable if (X, Y) assigns a point in the xy-plane to each outcome in S.
2) The distribution function of a two-dimensional continuous random variable (X, Y) is a real-valued function defined as
F(x, y) = P[X ≤ x, Y ≤ y] for all real x and y; in terms of the joint density function,
F(x, y) = ∫(−∞ to x) ∫(−∞ to y) f(x, y) dy dx.
3) The joint probability density function f(x, y) satisfies
i) f(x, y) ≥ 0, and
ii) ∫(−∞ to ∞) ∫(−∞ to ∞) f(x, y) dy dx = 1.
4) The marginal distribution function of the continuous random variable X is defined as
FX(x) = P[X ≤ x] = ∫(−∞ to x) [∫(−∞ to ∞) f(x, y) dy] dx,
and that of the continuous random variable Y is defined as
FY(y) = P[Y ≤ y] = ∫(−∞ to y) [∫(−∞ to ∞) f(x, y) dx] dy.
5) The marginal probability density function of X is given as
f(x) = ∫(−∞ to ∞) f(x, y) dy = (d/dx) FX(x).
7.8 SOLUTIONS/ANSWERS
E1) As f(x, y) is the joint probability density function,
∫∫ f(x, y) dy dx = 1
⟹ ∫(0 to 4) ∫(1 to 5) kxy dy dx = 1 ⟹ k ∫(0 to 4) x [y²/2](1 to 5) dx = 1
⟹ k ∫(0 to 4) 12x dx = 1 ⟹ 12k [x²/2](0 to 4) = 1
⟹ 96k = 1 ⟹ k = 1/96.
E2) i) The marginal density function of Y is given by
f(y) = ∫ f(x, y) dx = ∫(y to 1) 2 dx
[As x is involved in both the given ranges, i.e. 0 < x < 1 and 0 < y < x, we combine both the intervals and hence have 0 < y < x < 1; so x takes values from y to 1.]
= 2[x](y to 1) = 2 − 2y = 2(1 − y), 0 < y < 1.
The marginal density function of X is given by
f(x) = ∫ f(x, y) dy = ∫(0 to x) 2 dy [∵ 0 < y < x < 1]
= 2[y](0 to x) = 2x, 0 < x < 1.
ii) The conditional density function of Y given X (0 < X < 1) is
f(y | x) = f(x, y)/f(x) = 2/(2x) = 1/x, 0 < y < x,
and the conditional density function of X given Y (0 < Y < 1) is
f(x | y) = f(x, y)/f(y) = 2/[2(1 − y)] = 1/(1 − y), y < x < 1.
iii) f(x) f(y) = 2x × 2(1 − y) = 4x(1 − y).
As f(x, y) ≠ f(x) f(y), X and Y are not independent.
E3) (i) P X 1, Y 3 f x, y dy dx
1 3
1
6 x y dy dx
0 2
8
3
1
1 y2
6y xy dx
8
0
2 2
1
1 9
6 3 x 3 12 2x 2dx
8 0 2
1
1 9
18 3x 10 2x dx
8 0 2
1 1
1 7 1 7 x2 1 7 1 3
x dx x
802 8 2 2 0 8 2 2 8
ii) P[X < 1 | Y < 3] = P[X < 1, Y < 3] / P[Y < 3],
where P[Y < 3] = ∫(0 to 2) ∫(2 to 3) (1/8)(6 − x − y) dy dx
= (1/8) ∫(0 to 2) [6y − xy − y²/2](y = 2 to 3) dx
= (1/8) ∫(0 to 2) [(18 − 3x − 9/2) − (12 − 2x − 2)] dx
= (1/8) ∫(0 to 2) (7/2 − x) dx
= (1/8)[7x/2 − x²/2](0 to 2) = (1/8)(7 − 2) = 5/8.
∴ P[X < 1 | Y < 3] = (3/8)/(5/8) [the value of the numerator has already been calculated in part (i)]
= 3/5.
UNIT 8 MATHEMATICAL EXPECTATION
Structure
8.1 Introduction
    Objectives
8.1 INTRODUCTION
In Units 1 to 4 of this course, you have studied probabilities of different events in various situations. The concept of a univariate random variable has been introduced in Unit 5, and that of a bivariate random variable in Units 6 and 7. Before studying the present unit, we advise you to go through the above units.
You have studied the methods of finding mean, variance and other measures in
context of frequency distributions in MST-002 (Descriptive Statistics). Here, in
this unit we will discuss mean, variance and other measures in context of
probability distributions of random variables. Mean or Average value of a
random variable taken over all its possible values is called the expected value
or the expectation of the random variable. In the present unit, we discuss the
expectations of random variables and their properties.
In Secs. 8.2, 8.3 and 8.4, we deal with expectation and its properties. Addition
and multiplication laws of expectation have been discussed in Sec. 8.5.
Objectives
After studying this unit, you would be able to:
find the expected values of random variables;
establish the properties of expectation;
obtain various measures for probability distributions; and
apply laws of addition and multiplication of expectation at appropriate
situations.
Recall that the mean of a frequency distribution {(xi, fi), i = 1, 2, …, n} is
Mean = Σ(i=1 to n) fi xi / Σ(i=1 to n) fi
= (f1x1 + f2x2 + … + fnxn) / Σ(i=1 to n) fi
= x1 (f1/Σfi) + x2 (f2/Σfi) + … + xn (fn/Σfi).
Notice that f1/Σfi, f2/Σfi, …, fn/Σfi are, in fact, the relative frequencies or, in the long run, the probabilities with which X takes the values x1, x2, …, xn.
56
n
Mathematical Expectation
x f
i 1
i i
Mean of a frequency distribution of X is n
, similarly mean of a
fi
i 1
n
x p
i 1
i i
probability distribution of r.v. X is n
.
p
i 1
i
n
Now, as we know that p
i 1
i 1 for a probability distribution, therefore
n
the mean of the probability distribution becomes x i pi .
i 1
n
Expected value of a random variable X is E X x i pi .
i 1
The above formula for finding the expected value of a random variable X
is used only if X is a discrete random variable which takes the values
x1 , x 2 , ..., x n with probability mass function
p x i P X x i , i 1, 2,..., n.
For the probability distribution taking the values 0, 1, 2 with probabilities 1/4, 1/2, 1/4,
E(X) = x1p1 + x2p2 + x3p3 = 0(1/4) + 1(1/2) + 2(1/4) = 0 + 1/2 + 1/2 = 1.
So, we get the same answer, i.e. 1, using the formula also.
Thus, the expectation of a random variable is nothing but the average (mean) taken over all the possible values of the random variable, or the value which we get on an average when a random experiment is performed repeatedly.
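In code, the expectation is a one-line weighted sum. A minimal Python sketch (added for illustration, using the distribution above):

from fractions import Fraction as F

x = [0, 1, 2]
p = [F(1, 4), F(1, 2), F(1, 4)]
print(sum(xi * pi for xi, pi in zip(x, p)))   # E(X) = 1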
Remark 1: Sometimes the summations and integrals considered in the above definitions may not be convergent, and hence the expectations in such cases do not exist. But we will deal only with those summations (series) and integrals which are convergent, as the topic of checking the convergence of series or integrals is outside the scope of this course. You need not bother about whether the series or integral is convergent, i.e. whether the expectation exists, as we deal only with expectations which exist.
Example 1: If it rains, a raincoat dealer can earn Rs 500 per day. If it is a dry day, he can lose Rs 100 per day. What is his expectation, if the probability of rain is 0.4?
Solution: Let X be the amount (in Rs) earned on a day by the dealer. Therefore, X can take the values Rs 500 and −Rs 100 (∵ a loss of Rs 100 is equivalent to a negative earning of Rs 100).
The probability distribution of X is given as

          Rainy day   Dry day
X in Rs:  500         −100
p(x):     0.4         0.6

Hence, E(X) = 500(0.4) + (−100)(0.6) = 200 − 60 = 140.
Thus, his expectation is Rs 140, i.e. on an average he earns Rs 140 per day.
Example 2: A person tosses two fair coins. He wins Rs 5 if two heads appear, Rs 2 if one head appears, and Rs 1 if no head appears. Find the expected value of the amount won by him.
Solution: For the toss of two fair coins,
P[2 heads] = 1/4, P[one head] = 2/4, P[no head] = 1/4.
Let X be the amount in rupees won by him.
X can take the values 5, 2 and 1 with
P[X = 5] = P[2 heads] = 1/4,
P[X = 2] = P[1 head] = 2/4, and
P[X = 1] = P[no head] = 1/4.
The probability distribution of X is

X:      5     2     1
p(x):   1/4   2/4   1/4

The expected value of X is given as
E(X) = Σ(i=1 to 3) xi pi = x1p1 + x2p2 + x3p3
= 5(1/4) + 2(2/4) + 1(1/4) = (5 + 4 + 1)/4 = 10/4 = 2.5.
Thus, the expected value of the amount won by him is Rs 2.5.
Example 3: Find the expectation of the number on an unbiased die when thrown.
Solution: Let X be a random variable representing the number on a die when thrown.
X can take the values 1, 2, 3, 4, 5, 6 with
P[X = 1] = P[X = 2] = P[X = 3] = P[X = 4] = P[X = 5] = P[X = 6] = 1/6.
Thus, the probability distribution of X is given by

X:      1     2     3     4     5     6
p(x):   1/6   1/6   1/6   1/6   1/6   1/6

Hence, the expectation of the number on the die when thrown is
E(X) = Σ(i=1 to 6) xi pi = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 21/6 = 7/2.
Example 4: Two cards are drawn successively with replacement from a well-shuffled pack of 52 cards. Find the expected value of the number of aces.
Solution: Let A1, A2 be the events of getting an ace in the first and second draws, respectively. Let X be the number of aces drawn. Thus, X can take the values 0, 1, 2 with
P[X = 0] = P[no ace] = P[Ā1 ∩ Ā2] = P(Ā1) P(Ā2)
= (48/52)(48/52) = (12/13)(12/13) = 144/169,
P[X = 1] = P[one ace and one other card]
= P[(A1 ∩ Ā2) ∪ (Ā1 ∩ A2)] = P(A1)P(Ā2) + P(Ā1)P(A2)
= (4/52)(48/52) + (48/52)(4/52) = 12/169 + 12/169 = 24/169, and
P[X = 2] = P[both aces] = P(A1) P(A2) = (4/52)(4/52) = 1/169.
Hence, E(X) = 0(144/169) + 1(24/169) + 2(1/169) = 26/169 = 2/13.
Now, you can try the following exercises.
E1) You toss a fair coin. If the outcome is head, you win Rs 100; if the
outcome is tail, you win nothing. What is the expected amount won
by you?
E2) A fair coin is tossed until a tail appears. What is the expectation of the number of tosses?
E3) The distribution of a continuous random variable X is defined by
f(x) = x³,        0 ≤ x ≤ 1
     = (2 − x)³,  1 < x ≤ 2
     = 0,         elsewhere.
Obtain the expected value of X.
For a discrete random variable X taking values xi with probabilities pi, and constants k, a and b:
1. E(k) = Σi k pi = k Σi pi = k [∵ Σi pi = 1]
2. E(kX) = Σi k xi pi = k Σi xi pi = k E(X)
3. E(aX + b) = Σi (axi + b) pi [by def.]
= Σi (axi pi + bpi) = a Σi xi pi + b Σi pi
= aE(X) + b(1) = aE(X) + b.
Continuous Case:
Let X be a continuous random variable having f(x) as its probability density function. Then:
1. E(k) = ∫ k f(x) dx = k ∫ f(x) dx = k [by def., ∵ ∫ f(x) dx = 1]
2. E(kX) = ∫ k x f(x) dx = k ∫ x f(x) dx = k E(X)
3. E(aX + b) = ∫ (ax + b) f(x) dx = ∫ ax f(x) dx + ∫ b f(x) dx
= a ∫ x f(x) dx + b ∫ f(x) dx = aE(X) + b(1) = aE(X) + b.
Example 6: For a probability distribution of a discrete r.v. X taking five values x1, x2, …, x5, find
i) E(X), ii) E(2X + 3), iii) E(X²), and iv) E(4X − 5).
Solution:
i) E(X) = Σ(i=1 to 5) xi pi = x1p1 + x2p2 + x3p3 + x4p4 + x5p5,
and similarly for the remaining parts, using E(2X + 3) = 2E(X) + 3, E(X²) = Σi xi²pi, and E(4X − 5) = 4E(X) − 5.
Let us now express the moments and other measures for a random variable in terms of expectations in the following section.
Just as the r-th order moment of a frequency distribution about a point A is Σi fi(xi − A)^r / Σi fi, the r-th order moment about any point A of a random variable X having probability mass function P[X = xi] = p(xi) = pi is defined as
μ′r = Σ(i=1 to n) pi (xi − A)^r / Σ(i=1 to n) pi
= Σ(i=1 to n) pi (xi − A)^r [∵ Σ pi = 1],
i.e. μ′r = E(X − A)^r.
Variance
The variance of a random variable X is the second order central moment μ2 and is defined as
σ² = μ2 = V(X) = E[X − E(X)]².
Also, we know that
V(X) = μ2′ − (μ1′)² = E(X²) − [E(X)]².
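The shortcut formula V(X) = E(X²) − [E(X)]² is convenient computationally as well. A small Python sketch (added for illustration, reusing the distribution with values 0, 1, 2 and probabilities 1/4, 1/2, 1/4):

from fractions import Fraction as F

x = [0, 1, 2]
p = [F(1, 4), F(1, 2), F(1, 4)]
EX = sum(xi * pi for xi, pi in zip(x, p))
EX2 = sum(xi**2 * pi for xi, pi in zip(x, p))
print(EX2 - EX**2)   # V(X) = 3/2 - 1 = 1/2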
Also, V(aX + b) = E[(aX + b) − E(aX + b)]² = E[aX + b − aE(X) − b]²
= a² E[X − E(X)]² = a² V(X) [using Property 2 of Section 8.3].
Cor. (i) V(aX) = a² V(X),
(ii) V(b) = 0, and
(iii) V(X + b) = V(X).
Covariance
Just as the covariance of a frequency distribution is Σi fi (xi − x̄)(yi − ȳ) / Σi fi, the covariance of two random variables X and Y is defined as
Cov(X, Y) = E[(X − E(X))(Y − E(Y))] = Σi Σj pij (xi − E(X))(yj − E(Y)),
where pij = P[X = xi, Y = yj].
For two random variables X and Y, V(X ± Y) = V(X) + V(Y) ± 2 Cov(X, Y).
Proof: V(X + Y) = E[(X + Y) − E(X + Y)]²
= E[(X + Y) − E(X) − E(Y)]²
= E[(X − E(X)) + (Y − E(Y))]²
= E[(X − E(X))² + (Y − E(Y))² + 2(X − E(X))(Y − E(Y))]
= E[X − E(X)]² + E[Y − E(Y)]² + 2E[(X − E(X))(Y − E(Y))]
= V(X) + V(Y) + 2 Cov(X, Y),
and similarly V(X − Y) = V(X) + V(Y) − 2 Cov(X, Y).
If X and Y are independent, Cov(X, Y) = 0, so that V(X ± Y) = V(X) + V(Y) and, more generally,
V(aX ± bY) = a² V(X) + b² V(Y).
Mean Deviation
Just as the mean deviation about the mean of a frequency distribution is Σi fi |xi − x̄| / Σi fi, the mean deviation about the mean of a random variable X is Σi pi |xi − Mean| / Σi pi, and as Σi pi = 1,
Mean deviation about mean = Σ pi |xi − Mean| for a discrete r.v.
= ∫ |x − Mean| f(x) dx for a continuous r.v.,
i.e. it equals E|X − Mean|.
Note: Other measures, as defined for frequency distributions in MST-002, can be defined for probability distributions also, and hence can be expressed in terms of expectations in the same manner as the moments, variance and covariance have been defined in this section of the unit.
Example 7: Considering the probability distribution given in Example 6, obtain
i) V(X), and ii) V(2X + 3).
Solution:
i) V(X) = E(X²) − [E(X)]², using the values of E(X²) and E(X) obtained in Example 6.
ii) V(2X + 3) = (2)² V(X) = 4 V(X) [∵ V(aX + b) = a² V(X)].
Further, if X and Y are independent random variables with V(X) = 2 and V(Y) = 3, then
V(3X + 4Y) = (3)² V(X) + (4)² V(Y) [by Remark 3 of Section 8.4]
= 9(2) + 16(3) = 18 + 48 = 66.
Addition Theorem of Expectation
Theorem 8.2: If X and Y are random variables, then E(X + Y) = E(X) + E(Y).
Proof:
Discrete case:
Let (X, Y) be a discrete two-dimensional random variable which takes the values (xi, yj) with joint probability mass function pij = P[X = xi ∩ Y = yj]. Let the marginal probability mass functions be
pi = P[X = xi] = Σj pij and p′j = P[Y = yj] = Σi pij.
Then
E(X) = Σi xi pi, E(Y) = Σj yj p′j, and E(X + Y) = Σi Σj (xi + yj) pij.
Now, E(X + Y) = Σi Σj (xi + yj) pij
= Σi Σj xi pij + Σi Σj yj pij
= Σi xi (Σj pij) + Σj yj (Σi pij)
[∵ in the first term of the right hand side, xi is free from j and hence can be taken outside the summation over j; and in the second term, yj is free from i and hence can be taken outside the summation over i]
= Σi xi pi + Σj yj p′j = E(X) + E(Y).
Continuous Case:
Let (X, Y) be a bivariate continuous random variable with probability density function f(x, y), and let f(x) and f(y) be the marginal probability density functions of X and Y respectively. Then
E(X) = ∫ x f(x) dx, E(Y) = ∫ y f(y) dy, and E(X + Y) = ∫∫ (x + y) f(x, y) dy dx.
Now, E(X + Y) = ∫∫ (x + y) f(x, y) dy dx
= ∫∫ x f(x, y) dy dx + ∫∫ y f(x, y) dy dx
= ∫ x [∫ f(x, y) dy] dx + ∫ y [∫ f(x, y) dx] dy
[∵ in the first term of the R.H.S., x is free from the integral w.r.t. y and hence can be taken outside this integral; similarly, in the second term, y is free from the integral w.r.t. x and hence can be taken outside that integral]
= ∫ x f(x) dx + ∫ y f(y) dy [refer to the definition of the marginal density function given in Unit 7 of this course]
= E(X) + E(Y).
Remark 3: The result can be similarly extended to more than two random variables.
Multiplication Theorem of Expectation
Theorem 8.3: If X and Y are independent random variables, then
E(XY) = E(X) E(Y).
Proof:
Discrete Case:
Let (X, Y) be a two-dimensional discrete random variable which takes the values (xi, yj) with joint probability mass function pij = P[X = xi ∩ Y = yj]. Let pi and p′j be the marginal probability mass functions of X and Y respectively. Then
E(X) = Σi xi pi, E(Y) = Σj yj p′j, and E(XY) = Σi Σj xi yj pij.
As X and Y are independent,
pij = P[X = xi ∩ Y = yj] = P[X = xi] P[Y = yj] = pi p′j
[∵ if events A and B are independent, then P(A ∩ B) = P(A) P(B)].
Hence, E(XY) = Σi Σj xi yj pi p′j = (Σi xi pi)(Σj yj p′j) = E(X) E(Y).
Continuous Case:
Let (X, Y) be a bivariate continuous random variable with joint density f(x, y) and marginal densities f(x) and f(y). Then
E(X) = ∫ x f(x) dx, E(Y) = ∫ y f(y) dy, and E(XY) = ∫∫ xy f(x, y) dy dx.
Now, E(XY) = ∫∫ xy f(x, y) dy dx
= ∫∫ xy f(x) f(y) dy dx [∵ X and Y are independent, so f(x, y) = f(x) f(y); see Unit 7 of this course]
= ∫ x f(x) [∫ y f(y) dy] dx
= ∫ x f(x) dx ∫ y f(y) dy = E(X) E(Y).
Remark 4: The result can be similarly extended to more than two random variables.
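Both theorems are easy to check empirically for two independent dice. A Python simulation sketch (added for illustration; the seed and sample size are arbitrary):

import random

random.seed(1)
n = 200_000
xs = [random.randint(1, 6) for _ in range(n)]
ys = [random.randint(1, 6) for _ in range(n)]

mean = lambda v: sum(v) / len(v)
print(mean([a + b for a, b in zip(xs, ys)]), mean(xs) + mean(ys))   # both near 7
print(mean([a * b for a, b in zip(xs, ys)]), mean(xs) * mean(ys))   # both near 12.25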
Example 8: Two unbiased dice are thrown. Find the expected value of the sum of the numbers of points on them.
Solution: Let X be the number obtained on the first die and Y the number obtained on the second die. Then
E(X) = 7/2 and E(Y) = 7/2 [see Example 3 given in Section 8.2].
The required expected value = E(X + Y)
= E(X) + E(Y) [using the addition theorem of expectation]
= 7/2 + 7/2 = 7.
Remark 5: This example can also be done considering one random variable only, as follows:
Let X be the random variable denoting "the sum of the numbers of points on the dice". Then the probability distribution in this case is

X:      2      3      4      5      6      7      8      9      10     11     12
p(x):   1/36   2/36   3/36   4/36   5/36   6/36   5/36   4/36   3/36   2/36   1/36

and hence E(X) = 2(1/36) + 3(2/36) + … + 12(1/36) = 7.
Example 9: Two cards are drawn one by one with replacement from 8 cards numbered from 1 to 8. Find the expectation of the product of the numbers on the drawn cards.
Solution: Let X be the number on the first card and Y the number on the second card. The probability distribution of each of X and Y is

X:      1     2     3     4     5     6     7     8
p(x):   1/8   1/8   1/8   1/8   1/8   1/8   1/8   1/8

∴ E(X) = E(Y) = 1(1/8) + 2(1/8) + … + 8(1/8)
= (1/8)(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8) = 36/8 = 9/2.
As the draws are with replacement, X and Y are independent, and thus the required expected value is
E(XY) = E(X) E(Y) [using the multiplication theorem of expectation]
= (9/2)(9/2) = 81/4.
Expectation of a Linear Combination of Random Variables
Theorem 8.4: Let X1, X2, …, Xn be any n random variables and let a1, a2, …, an be any n constants. Then
E(a1X1 + a2X2 + … + anXn) = a1E(X1) + a2E(X2) + … + anE(Xn).
[Note: Here a1X1 + a2X2 + … + anXn is a linear combination of X1, X2, …, Xn.]
Now before ending this unit, let’s summarize what we have covered in it.
8.6 SUMMARY
The following main points have been covered in this unit:
1) The expected value of a random variable X is defined as
E(X) = Σ(i=1 to n) xi pi, if X is a discrete random variable,
= ∫ x f(x) dx, if X is a continuous random variable.
2) For constants k, a and b: E(k) = k, E(kX) = kE(X), and E(aX + b) = aE(X) + b.
3) If X and Y are random variables, then E(X + Y) = E(X) + E(Y); if they are also independent, E(XY) = E(X) E(Y).
4) If X1, X2, …, Xn are any n random variables and a1, a2, …, an are any n constants, then
E(a1X1 + a2X2 + … + anXn) = a1E(X1) + a2E(X2) + … + anE(Xn).
5) The r-th order moment of X about a point A is
μ′r = Σi pi (xi − A)^r, if X is a discrete r.v.,
= ∫ (x − A)^r f(x) dx, if X is a continuous r.v.,
i.e. μ′r = E(X − A)^r.
6) The variance of a random variable X is given as
V(X) = E[X − E(X)]² = E(X²) − [E(X)]².
7) The covariance of X and Y is Cov(X, Y) = E[(X − E(X))(Y − E(Y))].
8.7 SOLUTIONS/ANSWERS
E1) Let X be the amount (in rupees) won by you.
X can take the values 100 and 0, with P[X = 100] = P[Head] = 1/2 and P[X = 0] = P[Tail] = 1/2.
∴ the probability distribution of X is

X:      100   0
p(x):   1/2   1/2

and hence the expected amount won by you is
E(X) = 100(1/2) + 0(1/2) = Rs 50.
E2) Let X be the number of tosses till a tail turns up.
X can take the values 1, 2, 3, 4, … with
P[X = 1] = P[tail in the first toss] = 1/2,
P[X = 2] = P[head in the first and tail in the second toss] = (1/2)(1/2) = (1/2)²,
P[X = 3] = P[HHT] = (1/2)(1/2)(1/2) = (1/2)³, and so on.
The probability distribution of X is

X:      1     2        3        4        5 …
p(x):   1/2   (1/2)²   (1/2)³   (1/2)⁴   (1/2)⁵ …

and hence
E(X) = 1(1/2) + 2(1/2)² + 3(1/2)³ + 4(1/2)⁴ + …                  … (1)
Multiplying both sides by 1/2, we get
(1/2) E(X) = 1(1/2)² + 2(1/2)³ + 3(1/2)⁴ + …                      … (2)
[shifting the position one step towards the right so that terms having the same power sit at the same positions as in (1)]
Now, subtracting (2) from (1), we have
E(X) − (1/2) E(X) = 1/2 + (1/2)² + (1/2)³ + (1/2)⁴ + …
⟹ (1/2) E(X) = (1/2)[1 + 1/2 + (1/2)² + (1/2)³ + …]
⟹ E(X) = 1 + 1/2 + (1/2)² + …
(which is an infinite G.P. with first term a = 1 and common ratio r = 1/2)
= 1/(1 − 1/2) [∵ S∞ = a/(1 − r); see Unit 3 of course MST-001]
= 2.
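The value E(X) = 2 can also be checked by simulating the experiment. A Python sketch (added for illustration; the seed is arbitrary):

import random

random.seed(2)

def tosses_until_tail():
    n = 1
    while random.random() < 0.5:   # a head with probability 1/2; keep tossing
        n += 1
    return n

trials = 100_000
print(sum(tosses_until_tail() for _ in range(trials)) / trials)   # close to 2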
E3) E(X) = ∫ x f(x) dx
= ∫(−∞ to 0) x f(x) dx + ∫(0 to 1) x f(x) dx + ∫(1 to 2) x f(x) dx + ∫(2 to ∞) x f(x) dx
= 0 + ∫(0 to 1) x(x³) dx + ∫(1 to 2) x(2 − x)³ dx + 0
= ∫(0 to 1) x⁴ dx + ∫(1 to 2) (8x − 12x² + 6x³ − x⁴) dx
[expanding (2 − x)³ = 8 − 12x + 6x² − x³]
= [x⁵/5](0 to 1) + [4x² − 4x³ + 3x⁴/2 − x⁵/5](1 to 2)
= 1/5 + [(16 − 32 + 24 − 32/5) − (4 − 4 + 3/2 − 1/5)]
= 1/5 + (8/5 − 13/10) = 1/5 + 3/10 = 1/2.
E4) As X is a random variable with mean μ,
E(X) = μ.                                  … (1)
Now, for Z = (X − μ)/σ,
E(Z) = E[(X − μ)/σ]
= (1/σ) E(X − μ) [using Property 2 of Sec. 8.3]
= (1/σ)[E(X) − μ] [using Property 3 of Sec. 8.3]
= (1/σ)(μ − μ) [using (1)]
= 0.
Note: The mean of a standard random variable is zero.
E5) The variance of the standard random variable Z = (X − μ)/σ is given as
V(Z) = V[(X − μ)/σ]
= (1/σ²) V(X − μ) [using the result of Theorem 8.1 of Sec. 8.5 of this unit]
= (1/σ²) V(X)
= (1/σ²)(σ²) [∵ it is given that the standard deviation of X is σ and hence its variance is σ²]
= 1.
Note: The mean of a standard random variable is 0 [see E4)] and its variance is 1.
E6) Given that E(Y) = 0, i.e. E(aX − b) = 0 ⟹ aE(X) − b = 0
⟹ a(10) − b = 0 ⟹ 10a − b = 0.                   … (1)
Also, as V(Y) = 1, V(aX − b) = 1
⟹ a² V(X) = 1 ⟹ a²(25) = 1 ⟹ a² = 1/25
⟹ a = 1/5 [∵ a is positive].
From (1), we have
10(1/5) − b = 0 ⟹ 2 − b = 0 ⟹ b = 2.
Hence, a = 1/5, b = 2.
E7) Let X be the number on the first card and Y the number on the second card. The probability distribution of each of X and Y is

X:      1      2      3      4      5      6      7      8      9      10
p(x):   1/10   1/10   1/10   1/10   1/10   1/10   1/10   1/10   1/10   1/10

E(X) = E(Y) = 1(1/10) + 2(1/10) + … + 10(1/10)
= (1/10)(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10) = 55/10 = 5.5,
and hence the required expected value is
E(X + Y) = E(X) + E(Y) = 5.5 + 5.5 = 11.
E8) Let X be the number obtained on the first die and Y the number obtained on the second die.
Then E(X) = E(Y) = 7/2 [see Example 3 given in Section 8.2].
Hence, the required expected value is
E(XY) = E(X) E(Y) [using the multiplication theorem of expectation]
= (7/2)(7/2) = 49/4.