CHAPTER TWO
2. RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS
2.1. The Concept and Definition of a Random Variable
Random variable: a variable whose value is unknown in advance, or a function that assigns a numerical value to each of an experiment's outcomes. Random variables are often designated by capital letters and can be classified as discrete, taking only specific, separated values, or continuous, taking any value within a continuous range.
A random variable is a variable that has a single numerical value, determined by
chance, for each outcome of a procedure.
A random variable is discrete if it takes on a countable number of values (i.e.
there are gaps between values).
A random variable is continuous if there are an infinite number of values the
random variable can take, and they are densely packed together (i.e. there are no
gaps between values).
A probability distribution is a description of the chance a random variable has of
taking on particular values. It is often displayed in a graph, table, or formula.
A probability histogram is a display of a probability distribution of a discrete
random variable. It is a histogram where each bar’s height represents the
probability that the random variable takes on a particular value.
A random variable is a variable X whose value is determined by the outcome of a random experiment. It is classified as:
Discrete random variable
Continuous random variable
2.2. Discrete Random Variables and their Probability Distributions
A discrete random variable takes its possible values at isolated points along the number line. Every random variable has its own sample space, or set of possible values; if this set is finite or countable, the random variable is said to be discrete.
A discrete random variable can assume only a countable number of real values.
If X is a discrete random variable taking the values x1, x2, …, xn, then P(xi) = P(X = xi), where i = 1, 2, …, n, is called the probability mass function (pmf) of the random variable X.
The set of ordered pairs (xi, P(xi)), i = 1, 2, …, n, gives the probability distribution of the random variable X.
Discrete probability distribution: a probability distribution describes the possible values and their probabilities of occurring.
A discrete probability distribution is given by a probability mass function (pmf) p(·), which must satisfy the following conditions.
Properties of P(X = xi), where X is a discrete random variable:
1. 0 ≤ P(X = xi) ≤ 1
2. ∑i P(X = xi) = 1
Example: Consider the experiment of tossing a coin three times. Let X be the number of heads; write the probability distribution of the random variable X.
Solution:
The sample space is {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}, so X takes the values 0, 1, 2, 3. Counting outcomes gives the probability distribution

X        0     1     2     3
P(X=x)   1/8   3/8   3/8   1/8
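The coin-toss distribution can be reproduced by brute-force enumeration of the sample space; a minimal sketch using only the Python standard library:

```python
from itertools import product
from collections import Counter

# Enumerate all 2^3 = 8 equally likely outcomes of three coin tosses
outcomes = list(product("HT", repeat=3))

# X = number of heads in each outcome
counts = Counter(seq.count("H") for seq in outcomes)

# pmf: P(X = x) = (number of favourable outcomes) / 8
pmf = {x: counts[x] / len(outcomes) for x in sorted(counts)}
print(pmf)  # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
```

The same enumeration idea works for any small finite experiment.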
Example: Suppose we record four consecutive baby births in a hospital. Let X be the absolute difference between the number of girls and the number of boys born. X is discrete, since it can only take the values 0, 2, or 4. (Why can't it be 1 or 3?) Furthermore, if we write out the sample space for this procedure, we can find the probability that X equals 0, 2, or 4:
S={mmmm, mmmf, mmfm, mfmm, fmmm, mmff, mfmf, mffm, fmfm, ffmm, fmmf, mfff,
fmff, ffmf, fffm, ffff}
There are 16 total cases, and each one is equally likely.
P(X = 0) = 6/16 = 0.375 (these are the cases mmff, mfmf, mffm, fmfm, ffmm, fmmf)
P(X = 2) = 8/16 = 0.5 (these are the cases mmmf, mmfm, mfmm, fmmm, mfff, fmff, ffmf,
fffm)
P(X = 4) = 2/16 = 0.125 (these are the cases mmmm and ffff)
Is this a probability distribution?
∑ P(X = x) = 0.375 + 0.5 + 0.125 = 1
0 ≤ 0.125, 0.375, 0.5 ≤ 1, so it is a probability distribution.
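The births example can be checked the same way by listing the 16 sequences; a small sketch, assuming X is the absolute difference between girls and boys:

```python
from itertools import product

# All 16 equally likely birth sequences of length 4 (m = boy, f = girl)
space = list(product("mf", repeat=4))

# X = |number of girls - number of boys| for each sequence
pmf = {}
for seq in space:
    girls = seq.count("f")
    x = abs(girls - (4 - girls))
    pmf[x] = pmf.get(x, 0) + 1 / len(space)

print({x: pmf[x] for x in sorted(pmf)})  # {0: 0.375, 2: 0.5, 4: 0.125}
```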
Another Example: Is the following a probability distribution, where Y = X², X = 1, 2, …, 11?

Y      1     4     9     16    25    36    49    64    81    100   121
P(y)   0.20  0.15  0.14  0.12  0.10  0.09  0.07  0.05  0.04  0.02  0.01
Clearly, each probability is between 0 and 1. Now we need to see whether they sum to 1.
∑ P(y) = 0.20 + 0.15 + 0.14 + 0.12 + 0.10 + 0.09 + 0.07 + 0.05 + 0.04 + 0.02 + 0.01 = 0.99
Since the probabilities do not sum to 1, this is not a probability distribution.
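The two pmf conditions (each probability in [0, 1], probabilities summing to 1) can be verified mechanically:

```python
# Candidate probabilities from the table above
p = [0.20, 0.15, 0.14, 0.12, 0.10, 0.09, 0.07, 0.05, 0.04, 0.02, 0.01]

in_range = all(0 <= pi <= 1 for pi in p)  # condition 1: each P(y) in [0, 1]
total = round(sum(p), 10)                 # condition 2: rounded to remove float noise
print(in_range, total)  # True 0.99 -> not a probability distribution
```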
2.3. Continuous Random Variables and their Probability Density Functions
A random variable X is said to be continuous if it can take all possible values (integral as
well as fractional) between certain limits or intervals. Continuous random variables occur
when we deal with quantities that are measured on a continuous scale. For instance, the
life length of an electric bulb, the speed of a car, weights, heights, and the like are
continuous
If X is a continuous random variable, its distribution is described by a function f(x), called the probability density function (pdf) of X, for which probabilities are areas under the curve.
Properties of a pdf:
1. f(x) ≥ 0 for all x
2. ∫_(−∞)^(∞) f(x) dx = 1
3. P(x1 < X < x2) = ∫_(x1)^(x2) f(x) dx
4. P(X = a) = ∫_a^a f(x) dx = 0
Example: Find the constant c for which f(x) = cx², 0 < x < 3, is a pdf.
∫_0^3 cx² dx = c(x³/3)|_0^3 = 9c = 1, so c = 1/9.
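As a sketch of how such a normalizing constant is found: for f(x) = cx² on (0, 3) to be a pdf, its total area must equal 1. Since the antiderivative of x² is x³/3:

```python
# ∫_0^3 x² dx = 3³/3 = 9, so 9c = 1 and c = 1/9
area_unscaled = 3**3 / 3
c = 1 / area_unscaled
print(c)  # 0.1111111111111111 (= 1/9)

# Numerical cross-check: midpoint Riemann sum of f(x) = x²/9 over (0, 3)
n = 100_000
h = 3 / n
total = sum(c * ((i + 0.5) * h) ** 2 * h for i in range(n))
print(round(total, 6))  # 1.0
```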
Example: Suppose the number of episodes of diarrhea, X, that a child experiences in the first 2 years of life has the probability distribution

X        0      1      2      3  4  5  6
P(X=x)   0.129  0.264  0.271  …  …  …  0.017

What is the expected number of episodes of diarrhea in the first 2 years of life?
Solution: E(X) = ∑ x·P(X=x) = 0·P(X=0) + 1·P(X=1) + 2·P(X=2) + … + 6·P(X=6)
= 0(0.129) + 1(0.264) + 2(0.271) + … + 6(0.017) = 2.038
Thus, on average a child would be expected to have about 2 episodes of diarrhea in the first 2 years of life.
Example: A construction firm has recently sent in bids for 3 jobs worth (in profits) 10, 20, and 40 (thousand) dollars. If its probabilities of winning the jobs are respectively 0.2, 0.8, and 0.3, what is the firm's expected total profit?
Solution: Letting Xi, i = 1, 2, 3, denote the firm's profit from job i, the
Total profit = X1 + X2 + X3,
so, by linearity of expectation,
E(Total profit) = E(X1) + E(X2) + E(X3) = 10(0.2) + 20(0.8) + 40(0.3) = 2 + 16 + 12 = 30 (thousand dollars).
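By linearity of expectation, each job contributes (profit) × (win probability); a one-line check:

```python
# (profit in thousands, probability of winning) for the three bids
jobs = [(10, 0.2), (20, 0.8), (40, 0.3)]

# E(total) = E(X1) + E(X2) + E(X3); each E(Xi) = profit_i * p_i + 0 * (1 - p_i)
expected_total = sum(profit * p for profit, p in jobs)
print(expected_total)  # 30.0 thousand dollars
```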
EXAMPLE: Let the random variable X be defined as follows. Suppose that X is the time (in minutes) during which electrical equipment is used at maximum load in a certain specified time period. Suppose that X is a continuous random variable with the following pdf:

f(x) = x/1500²             for 0 ≤ x ≤ 1500
f(x) = −(x − 3000)/1500²   for 1500 < x ≤ 3000
f(x) = 0                   elsewhere

Then
E(X) = ∫_0^3000 x f(x) dx
= ∫_0^1500 (x²/1500²) dx + ∫_1500^3000 (−x(x − 3000)/1500²) dx
= [x³/(3·1500²)]_0^1500 + [(1500x² − x³/3)/1500²]_1500^3000
= 500 + 1000
= 1500 minutes
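A numeric sketch of this expectation, assuming the triangular pdf f(x) = x/1500² on [0, 1500] and (3000 − x)/1500² on (1500, 3000]:

```python
# Triangular density peaking at x = 1500 (total area 1)
def f(x):
    if 0 <= x <= 1500:
        return x / 1500**2
    if 1500 < x <= 3000:
        return (3000 - x) / 1500**2
    return 0.0

# Midpoint Riemann sum for E(X) = ∫_0^3000 x f(x) dx
n = 100_000
h = 3000 / n
expected = sum((i + 0.5) * h * f((i + 0.5) * h) * h for i in range(n))
print(round(expected))  # 1500
```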
Example:A large domestic automobile manufacturer mails out quarterly customer
satisfaction surveys to owners who have purchased new automobiles within the last 3 years.
The proportion of surveys returned in any given quarter is the outcome of a random variable
X having the density function f(x) = 3x² for 0 ≤ x ≤ 1 (and 0 elsewhere). What is the expected proportion of surveys returned in any given quarter?
Solution: By definition,
E(X) = ∫_0^1 x f(x) dx = ∫_0^1 x·3x² dx = ∫_0^1 3x³ dx = (3x⁴/4)|_0^1 = 0.75
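With a density of the form f(x) = 3x² on [0, 1] (the pdf assumed in this example; it integrates to 1), the mean can be checked numerically:

```python
# Midpoint Riemann sum for E(X) = ∫_0^1 x · 3x² dx = ∫_0^1 3x³ dx
n = 100_000
h = 1 / n
expected = sum(3 * ((i + 0.5) * h) ** 3 * h for i in range(n))
print(round(expected, 6))  # 0.75
```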
Properties of Variance
For any random variables X and Y and constants a and b, it can be shown that
- Var(X) = E(X²) − [E(X)]²
- Var(aX + b) = a² Var(X)
- If X and Y are independent, Var(X + Y) = Var(X) + Var(Y);
i.e., for independent X1, …, Xn, Var(∑ Xi) = ∑ Var(Xi)
Example: Compute the variance of f(x) = x²/9 for 0 < x < 3.
V(X) = E(X²) − [E(X)]²
E(X) = ∫_0^3 x·(x²/9) dx = ∫_0^3 x³/9 dx = (x⁴/36)|_0^3 = 81/36 = 2.25
E(X²) = ∫_0^3 x²·(x²/9) dx = ∫_0^3 x⁴/9 dx = (x⁵/45)|_0^3 = 243/45 = 5.4
V(X) = 5.4 − (2.25)² = 5.4 − 5.0625 = 0.3375
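The variance of the density f(x) = x²/9 on (0, 3) can be confirmed numerically; a minimal sketch:

```python
# Midpoint sums for E(X) and E(X²) under f(x) = x²/9 on (0, 3)
n = 200_000
h = 3 / n
xs = [(i + 0.5) * h for i in range(n)]
ex = sum(x * (x**2 / 9) * h for x in xs)      # E(X)  -> 2.25
ex2 = sum(x**2 * (x**2 / 9) * h for x in xs)  # E(X²) -> 5.4
var = ex2 - ex**2
print(round(var, 4))  # 0.3375
```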
2.5.1.1. Moment about the Origin
The first moment about the origin is simply the expectation of the random variable X, i.e., μ₁' = E(X¹) = E(X), a quantity that we examined at the beginning of our discussion of mathematical expectation. This balancing point of a density function, or the weighted average of the elements in the range of the random variable, is given a special name and symbol.
The first moment about the origin of a random variable X is called the mean of the random variable X (or the mean of the density function of X), and is denoted by the symbol μ. Thus, the first moment about the origin characterizes the central tendency of a density function. Measures of spread and skewness of a density function are given by certain moments about the mean.
2.5.1.2. Moment about the Mean
Let X be a random variable with density function f(x). Then the rth central moment of X (or the rth moment of X about the mean), denoted by μr, is defined as

μr = E[(X − μ)^r] = { ∑_x (x − μ)^r f(x),            if X is discrete
                    { ∫_(−∞)^(∞) (x − μ)^r f(x) dx,  if X is continuous

Note that μ₀ = 1 for any discrete or continuous random variable, since E[(X − μ)⁰] = E(1) = 1. Furthermore, μ₁ = 0 for any discrete or continuous random variable for which E(X) exists, since E(X − μ) = E(X) − μ = μ − μ = 0.
The second central moment is given a special name and symbol. The second central moment, E[(X − μ)²], of a random variable X is called the variance of the random variable X (or the variance of the density function of X), and is denoted by the symbol σ², or by Var(X).
The nonnegative square root of the variance of a random variable X (i.e., √σ²) is called the standard deviation of the random variable (or the standard deviation of the density function of X) and is denoted by the symbol σ, or by Std(X).
The relationship between raw moments and central moments:
I. μ₁ = 0
II. μ₂ = μ₂' − (μ₁')²
III. μ₃ = μ₃' − 3μ₂'μ₁' + 2(μ₁')³
IV. μ₄ = μ₄' − 4μ₃'μ₁' + 6μ₂'(μ₁')² − 3(μ₁')⁴
Exercise: Show that σ² = μ₂ = E(X²) − [E(X)]².
Example: Suppose that X is binomially distributed with parameters n and p. Recall that the moment generating function of X is defined as Mx(t) = E(e^(tX)); for the binomial distribution this gives Mx(t) = (pe^t + q)^n, where q = 1 − p. Find the 1st and 2nd moments, and then find the mean and variance of the binomial distribution.
Solution:
M'x(t) = npe^t (pe^t + q)^(n−1)
M''x(t) = npe^t [(pe^t + q)^(n−1) + (n − 1)pe^t (pe^t + q)^(n−2)]
Therefore E(X) = M'x(0) = np and E(X²) = M''x(0) = np[1 + (n − 1)p] = np + n(n − 1)p²
Var(X) = E(X²) − [E(X)]² = np + n(n − 1)p² − (np)² = np(1 − p) = npq
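The results E(X) = np and Var(X) = npq can be spot-checked by summing the binomial pmf directly (here with the illustrative values n = 10, p = 0.3):

```python
from math import comb

n, p = 10, 0.3  # illustrative parameters

# Binomial pmf: P(X = k) = C(n, k) p^k (1 - p)^(n - k)
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

mean = sum(k * pk for k, pk in enumerate(pmf))    # should equal np = 3
ex2 = sum(k**2 * pk for k, pk in enumerate(pmf))  # E(X²)
var = ex2 - mean**2                               # should equal npq = 2.1
print(round(mean, 6), round(var, 6))  # 3.0 2.1
```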
Theorem: Suppose that the random variable X has MGF Mx(t). Let Y = aX + b. Then My(t), the MGF of the random variable Y, is given by My(t) = e^(bt) Mx(at).
Example: The random variable X has the density function f(x) = e^(−x) for x > 0 (and 0 elsewhere). Find the MGF, and use it to find the mean and variance of X.
Solution: Mx(t) = ∫_0^∞ e^(tx) e^(−x) dx = ∫_0^∞ e^(−(1−t)x) dx = [−e^(−(1−t)x)/(1 − t)]_0^∞ = 1/(1 − t), provided t < 1.
Then M'x(t) = 1/(1 − t)² and M''x(t) = 2/(1 − t)³, so E(X) = M'x(0) = 1, E(X²) = M''x(0) = 2, and Var(X) = 2 − 1² = 1.
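Assuming the pdf f(x) = e^(−x) used in this example, Mx(t) = 1/(1 − t), and the moments can be approximated numerically (truncating the integral at x = 60, where the integrand is negligible):

```python
import math

# Midpoint approximation of Mx(t) = ∫_0^∞ e^(tx) e^(-x) dx, truncated at x = 60
def mgf(t, upper=60.0, n=200_000):
    h = upper / n
    return sum(math.exp((t - 1) * (i + 0.5) * h) * h for i in range(n))

print(round(mgf(0.5), 4))  # 2.0, matching 1/(1 - 0.5)

# Mean and variance via central differences of the MGF at t = 0
eps = 1e-3
m_plus, m_zero, m_minus = mgf(eps), mgf(0.0), mgf(-eps)
m1 = (m_plus - m_minus) / (2 * eps)            # approximates E(X)  = 1
m2 = (m_plus - 2 * m_zero + m_minus) / eps**2  # approximates E(X²) = 2
print(round(m1, 2), round(m2 - m1**2, 2))  # 1.0 1.0
```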