
Statistics for Economists (Econ2042) 2024/2025

CHAPTER TWO
2. RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS
2.1. The Concept & Definition of a Random Variable
Random Variable: a variable whose value is unknown in advance, or a function that assigns a numerical value to each outcome of an experiment. Random variables are often designated by capital letters and can be classified as discrete, which are variables that take specific, separated values, or continuous, which are variables that can take any value within a continuous range.
• A random variable is a variable that has a single numerical value, determined by chance, for each outcome of a procedure.
• A random variable is discrete if it takes on a countable number of values (i.e. there are gaps between values).
• A random variable is continuous if there are an infinite number of values the random variable can take, and they are densely packed together (i.e. there are no gaps between values).
• A probability distribution is a description of the chance a random variable has of taking on particular values. It is often displayed in a graph, table, or formula.
• A probability histogram is a display of the probability distribution of a discrete random variable: a histogram in which each bar's height represents the probability that the random variable takes on a particular value.
In short, a random variable is a variable X whose value is determined by the outcome of a random experiment, and it is classified as either:
• a discrete random variable, or
• a continuous random variable.
2.2. Discrete Random Variables and their Probability Distributions
• A discrete random variable takes its possible values at isolated points along the number line. Random variables have their own sample space, or set of possible values; if this set is finite or countable, the random variable is said to be discrete.
• Equivalently, a discrete random variable is one that can assume only a countable number of real values.
• If X is a discrete random variable taking the values x1, x2, …, xn, then P(xi) = P(X = xi), i = 1, 2, …, n, is called the probability mass function (pmf) of the random variable X.

By: Habitamu W. Page 1



• The set of ordered pairs (xi, P(xi)), i = 1, 2, …, n, gives the probability distribution of the random variable X.
Discrete probability distribution: a probability distribution describes the possible values of a random variable and their probabilities of occurring.
A discrete probability distribution is given by a probability mass function (pmf), p(·), which must satisfy the following conditions.
Properties of P(X = xi), where X is a discrete random variable:
1. 0 ≤ P(X = xi) ≤ 1
2. ∑ P(X = xi) = 1, where the sum runs over all values xi
Example: Consider the experiment of tossing a coin three times. Let X be the number of heads; write down the probability distribution of the random variable X.
Solution:
The sample space is {HHH, HHT, HTH, TTH, THT, THH, HTT, TTT}, so X takes the values 0, 1, 2, 3 with the probability distribution below.

X 0 1 2 3

P(X) 1/8 3/8 3/8 1/8
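The table above can be reproduced by enumerating the eight equally likely outcomes; a minimal sketch in Python (variable names are illustrative):

```python
from itertools import product
from fractions import Fraction

# Enumerate the 2^3 equally likely outcomes of tossing a coin three times.
outcomes = list(product("HT", repeat=3))

# X = number of heads; accumulate P(X = x) as exact fractions.
pmf = {}
for outcome in outcomes:
    x = outcome.count("H")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

for x in sorted(pmf):
    print(x, pmf[x])  # 0 1/8, 1 3/8, 2 3/8, 3 1/8
```

Using exact fractions avoids any floating-point rounding when checking that the probabilities sum to 1.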

Example: Suppose we record four consecutive baby births in a hospital. Let X be the
difference in the number of girls and boys born. X is discrete, since it can only have the
values 0, 2, or 4. (Why can’t it be 1 or 3?) Furthermore, if we write out the sample space for
this procedure, we can find the probability that X equals 0, 2, or 4:
S={mmmm, mmmf, mmfm, mfmm, fmmm, mmff, mfmf, mffm, fmfm, ffmm, fmmf, mfff,
fmff, ffmf, fffm, ffff}
There are 16 total cases, and each one is equally likely.
P(X = 0) = 6/16 = 0.375 (these are the cases mmff, mfmf, mffm, fmfm, ffmm, fmmf)
P(X = 2) = 8/16 = 0.5 (these are the cases mmmf, mmfm, mfmm, fmmm, mfff, fmff, ffmf,
fffm)
P(X = 4) = 2/16 = 0.125 (these are the cases mmmm and ffff)
Is this a probability distribution?
• ∑ P(x) = 0.375 + 0.5 + 0.125 = 1
• Each probability lies between 0 and 1: 0 ≤ 0.125 < 0.375 < 0.5 ≤ 1. So it is a probability distribution.
Another Example: Is the following a probability distribution? Let Y = X², where X = 1, 2, …, 11, with P(y) as given below.

Y 1 4 9 16 25 36 49 64 81 100 121

P(y) 0.20 0.15 0.14 0.12 0.10 0.09 0.07 0.05 0.04 0.02 0.01

• Clearly, each probability is between 0 and 1. Now we need to check whether they sum to 1.
• The probabilities sum to 0.20 + 0.15 + 0.14 + 0.12 + 0.10 + 0.09 + 0.07 + 0.05 + 0.04 + 0.02 + 0.01 = 0.99. Since they do not sum to 1, it is not a probability distribution.
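The two checks used in these examples (each probability in [0, 1], all probabilities summing to 1) can be wrapped in a small helper; a sketch, with an illustrative function name:

```python
def is_valid_pmf(probs, tol=1e-9):
    """Check the two pmf conditions: 0 <= p <= 1 for each p, and sum(p) = 1."""
    return all(0 <= p <= 1 for p in probs) and abs(sum(probs) - 1) <= tol

# The baby-birth distribution satisfies both conditions...
print(is_valid_pmf([0.375, 0.5, 0.125]))  # True

# ...but the probabilities in the last table sum to 0.99, so it is not a pmf.
print(is_valid_pmf([0.20, 0.15, 0.14, 0.12, 0.10,
                    0.09, 0.07, 0.05, 0.04, 0.02, 0.01]))  # False
```

The small tolerance absorbs floating-point rounding when the probabilities are given as decimals.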
2.3. Continuous Random Variables and their Probability Density Functions
• A random variable X is said to be continuous if it can take all possible values (integral as well as fractional) between certain limits or intervals. Continuous random variables occur when we deal with quantities that are measured on a continuous scale; for instance, the life length of an electric bulb, the speed of a car, weights, heights, and the like are continuous.
• If X is a continuous random variable, a nonnegative function f(x) satisfying P(a ≤ X ≤ b) = ∫_a^b f(x) dx for all a ≤ b is called the probability density function (pdf) of the random variable X.
Properties of a pdf:
1. f(x) ≥ 0 for all x
2. ∫_{-∞}^{∞} f(x) dx = 1
3. the pdf is integrated over the whole range (−∞, ∞)
4. P(x1 < X < x2) = ∫_{x1}^{x2} f(x) dx
5. P(X = a) = ∫_a^a f(x) dx = 0

Example: Suppose we have a continuous random variable X whose probability density function is given by

f(x) = cx² for 0 < x < 3, and f(x) = 0 otherwise.

A. Determine the value of c
B. Verify that f is a pdf
C. Calculate P(a < X < b) for 0 ≤ a < b ≤ 3
Solution:
a) ∫_{-∞}^{∞} f(x) dx = 1 (property of a pdf)
   ∫_0^3 cx² dx = c·(x³/3)|_0^3 = 9c = 1, so c = 1/9
b) f(x) = x²/9 ≥ 0 and ∫_0^3 (x²/9) dx = (x³/27)|_0^3 = 1, so f is a pdf
c) P(a < X < b) = ∫_a^b (x²/9) dx = (x³/27)|_a^b = (b³ − a³)/27
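The value c = 1/9 can be checked numerically; a sketch using a simple midpoint-rule integral (the step count is arbitrary):

```python
def integrate(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

pdf = lambda x: x**2 / 9  # f(x) = c*x^2 on (0, 3) with c = 1/9

total = integrate(pdf, 0, 3)
print(total)  # approximately 1, so f is a valid pdf

# P(a < X < b) = (b^3 - a^3)/27; for example a = 1, b = 2 gives 7/27
prob = integrate(pdf, 1, 2)
print(prob)  # approximately 7/27 = 0.259...
```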

2.4. The Expected Value of a Random Variable


The definition of the expectation of a random variable can be motivated both by the concept
of a weighted average and through the use of the physics concept of center of gravity, or the
balancing point of a distribution of weights. We first examine the case of a discrete random
variable and look at a problem involving the balancing-point concept.
The averaging process, when applied to a random variable, is called expectation. It is denoted by E(X) or μ and is read as the expected value of X or the mean value of X.
i. Expectation of a Random Variable: Discrete Case
The expected value of a discrete random variable is defined by E(X) = ∑ x·P(x), provided the sum exists. Since P(x) ≥ 0 and ∑ P(x) = 1, the expected value of a discrete random variable can also be straightforwardly interpreted as a weighted average of the possible outcomes (or range elements) of the random variable. In this context the weight assigned to a particular outcome of the random variable is equal to the probability that the outcome occurs (as given by the value of P(x)).
Example 1: Consider the random variable representing the number of episodes of diarrhea in
the first 2 years of life. Suppose this random variable has a probability mass function as
below

X 0 1 2 3 4 5 6

P(X = x) 0.129 0.264 0.271 0.185 0.095 0.039 0.017

What is the expected number of episodes of diarrhea in the first 2 years of life?
Solution: E(X) = ∑ x·P(X = x) = 0·P(X=0) + 1·P(X=1) + 2·P(X=2) + … + 6·P(X=6)
= 0(0.129) + 1(0.264) + 2(0.271) + … + 6(0.017) = 2.038
Thus, on average a child would be expected to have about 2 episodes of diarrhea in the first 2 years of life.
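The weighted-average computation above is a one-line sum over the pmf; a sketch:

```python
# pmf of the number of diarrhea episodes in the first 2 years of life
pmf = {0: 0.129, 1: 0.264, 2: 0.271, 3: 0.185, 4: 0.095, 5: 0.039, 6: 0.017}

# E(X) = sum of x * P(X = x) over all values x
expected = sum(x * p for x, p in pmf.items())
print(expected)  # approximately 2.038
```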
Example: A construction firm has recently sent in bids for 3 jobs worth (in profits) 10, 20, and 40 (thousand) dollars. If its probabilities of winning the jobs are respectively 0.2, 0.8, and 0.3, what is the firm's expected total profit?
Solution: Let Xi, i = 1, 2, 3, denote the firm's profit from job i. Then
Total profit X = X1 + X2 + X3,
so E(X) = E(X1) + E(X2) + E(X3) = 0.2(10) + 0.8(20) + 0.3(40) = 2 + 16 + 12 = 30.
Therefore the firm's expected total profit is 30 thousand dollars.
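By linearity of expectation, the expected total profit is just the sum of the per-job expected profits; a sketch:

```python
# (profit if won, probability of winning) for each of the three jobs,
# in thousands of dollars
jobs = [(10, 0.2), (20, 0.8), (40, 0.3)]

# E(X_i) = profit * P(win); E(total) = sum of E(X_i) by linearity
expected_total = sum(profit * p for profit, p in jobs)
print(expected_total)  # approximately 30 (thousand dollars)
```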


ii. Expected Value of a Random Variable: Continuous Case

The expected value of the continuous random variable X is defined by

E(X) = ∫_{-∞}^{∞} x·f(x) dx, provided the integral exists.

EXAMPLE: Let the random variable X be defined as follows. Suppose that X is the time (in minutes) during which electrical equipment is used at maximum load in a certain specified time period. Suppose that X is a continuous random variable with the following pdf:

f(x) = x/(1500)² for 0 ≤ x ≤ 1500,
f(x) = −(x − 3000)/(1500)² for 1500 < x ≤ 3000,
f(x) = 0 otherwise.

Solution: By definition, E(X) = ∫_{-∞}^{∞} x·f(x) dx
= ∫_0^1500 x²/(1500)² dx − ∫_1500^3000 x(x − 3000)/(1500)² dx
= (x³/3)/(1500)² |_0^1500 − ((x³/3 − 1500x²)/(1500)²)|_1500^3000
= 500 + 1000
= 1500 minutes
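The answer E(X) = 1500 can be checked numerically, assuming the triangular density f(x) = x/1500² on [0, 1500] and (3000 − x)/1500² on [1500, 3000] (an assumption consistent with the answer above); a sketch with a midpoint-rule integral:

```python
def integrate(f, a, b, n=200_000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Assumed triangular pdf on [0, 3000], peaking at 1500
def pdf(x):
    if 0 <= x <= 1500:
        return x / 1500**2
    if 1500 < x <= 3000:
        return (3000 - x) / 1500**2
    return 0.0

expected = integrate(lambda x: x * pdf(x), 0, 3000)
print(expected)  # approximately 1500 minutes
```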
Example: A large domestic automobile manufacturer mails out quarterly customer satisfaction surveys to owners who have purchased new automobiles within the last 3 years. The proportion of surveys returned in any given quarter is the outcome of a random variable X having density function f(x) = 3x² for 0 ≤ x ≤ 1, and f(x) = 0 otherwise. What is the expected proportion of surveys returned in any given quarter?
Solution: By definition,

E(X) = ∫_{-∞}^{∞} x·f(x) dx = ∫_0^1 x·3x² dx = ∫_0^1 3x³ dx = (3x⁴/4)|_0^1 = 0.75

The expected proportion of surveys returned in any given quarter is 0.75.


Properties of Expectation
If X and Y are random variables and a, b are constants, then:
1. E(k) = k, where k is any constant
2. E(kX) = k·E(X), where k is any constant
3. E(X + k) = E(X) + k
4. E(X + Y) = E(X) + E(Y)
5. E(X) ≥ 0, if X ≥ 0
6. |E(X)| ≤ E(|X|)
7. [E(XY)]² ≤ E(X²)·E(Y²)
8. E(XY) = E(X)·E(Y), if X and Y are independent random variables

2.4.1. Variance of a Random Variable

The variance of a random variable measures the variability of the random variable about its expected value.
Definition: Let X be a random variable. We define the variance of X, denoted by V(X) or σ², as follows:
Mean of X: μ = E(X)
Variance of X: V(X) = E[(X − μ)²] = E(X²) − [E(X)]²
The positive square root of V(X) is called the standard deviation of X and is denoted by σ.
Proof: Expanding E[(X − μ)²] and using the previously established properties of expectation, we obtain
E[(X − μ)²] = E(X² − 2μX + μ²) = E(X²) − 2μ·E(X) + μ² = E(X²) − 2μ² + μ² = E(X²) − μ²

Case 1: Variance for discrete random variable


If X is a discrete random variable with expected value μ, then the variance of X, denoted by Var(X), is defined by:
Var(X) = E[(X − μ)²] = E(X²) − μ² = ∑ x²·P(x) − μ²
Alternatively, Var(X) = ∑ (x − μ)²·P(x)
Case 2: Variance for continuous random variable
If X is a continuous random variable, then Var(X) = ∫_{-∞}^{∞} (x − μ)²·f(x) dx

Properties of Variances
• For any random variables X and Y and constants a and b, it can be shown that:
- Var(k) = 0, where k is any constant
- Var(aX + b) = a²·Var(X)
- Var(aX + bY) = a²·Var(X) + b²·Var(Y), if X and Y are independent
- if X1, X2, …, Xn are independent random variables, then the variance of their sum is the sum of their variances, i.e., Var(∑ Xi) = ∑ Var(Xi)

Example: Compute the variance of f(x) = x²/9 for 0 < x < 3
V(X) = E(X²) − [E(X)]²
E(X) = ∫_0^3 x·(x²/9) dx = ∫_0^3 (x³/9) dx = (x⁴/36)|_0^3 = 81/36 = 2.25
E(X²) = ∫_0^3 x²·(x²/9) dx = ∫_0^3 (x⁴/9) dx = (x⁵/45)|_0^3 = 243/45 = 5.4
Therefore, V(X) = E(X²) − [E(X)]² = 5.4 − (2.25)² = 5.4 − 5.0625 = 0.3375
2.5. Moments and Moment Generating Functions


2.5.1. Moments of a Random Variable
The expectations of certain power functions of a random variable have uses as measures of
central tendency, spread or dispersion, and skewness of the density function of the random
variable, and also are important components of statistical inference procedures that we will
study in later chapters. These special expectations are called moments of the random variable
(or of the density function). There are two types of moments that we will be concerned with: moments about the origin and moments about the mean.
Moments tell us about the shape of the distribution; the nature of the distribution can be identified by looking at the various moment values.
2.5.1.1. Moment about the Origin
Let X be a random variable with density function f(x). Then the rth moment of X about the origin, denoted by μ′r, is defined for integers r > 0 as

μ′r = E(X^r) = ∑ x^r·P(x) if X is discrete, or ∫_{-∞}^{∞} x^r·f(x) dx if X is continuous.

The value of r in the definition of moments is referred to as the order of the moment, so that one would refer to E(X^r) as the moment of order r. Note that μ′0 = 1 for any discrete or continuous random variable, since μ′0 = E(X⁰) = E(1) = 1.

The first moment about the origin is simply the expectation of the random variable X, i.e., μ′1 = E(X¹) = E(X), a quantity that we examined at the beginning of our discussion of mathematical expectation. This balancing point of a density function, or the weighted average of the elements in the range of the random variable, will be given a special name and symbol.
• The first moment about the origin of a random variable X is called the mean of the random variable X (or mean of the density function of X), and will be denoted by the symbol μ. Thus, the first moment about the origin characterizes the central tendency of a density function. Measures of spread and skewness of a density function are given by certain moments about the mean.
2.5.1.2. Moment about the Mean
Let X be a random variable with density function f(x). Then the rth central moment of X (or the rth moment of X about the mean), denoted by μr, is defined as

μr = E[(X − μ)^r] = ∑ (x − μ)^r·P(x) if X is discrete, or ∫_{-∞}^{∞} (x − μ)^r·f(x) dx if X is continuous.

Note that μ0 = 1 for any discrete or continuous random variable, since μ0 = E[(X − μ)⁰] = E(1) = 1. Furthermore, μ1 = 0 for any discrete or continuous random variable for which E(X) exists, since μ1 = E(X − μ) = E(X) − μ = μ − μ = 0.
The second central moment is given a special name and symbol. The second central moment, μ2 = E[(X − μ)²], of a random variable X is called the variance of the random variable X (or the variance of the density function of X), and will be denoted by the symbol σ², or by Var(X).

The nonnegative square root of the variance of a random variable X (i.e., σ = √Var(X)) is called the standard deviation of the random variable (or standard deviation of the density function of X) and will be denoted by the symbol σ, or by Std(X).
The relationship between raw moments and central moments:
I. μ1 = 0
II. μ2 = μ′2 − (μ′1)²
III. μ3 = μ′3 − 3μ′2·μ′1 + 2(μ′1)³
IV. μ4 = μ′4 − 4μ′3·μ′1 + 6μ′2·(μ′1)² − 3(μ′1)⁴
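The link between raw and central moments, e.g. the variance identity μ2 = μ′2 − (μ′1)², can be illustrated on a small pmf; a sketch with an arbitrary example distribution:

```python
# A small example pmf to illustrate raw vs central moments
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def raw_moment(r):
    """r-th moment about the origin: E(X^r) = sum of x^r * P(x)."""
    return sum(x**r * p for x, p in pmf.items())

def central_moment(r):
    """r-th moment about the mean: E[(X - mu)^r]."""
    mu = raw_moment(1)
    return sum((x - mu)**r * p for x, p in pmf.items())

# mu_2 = mu'_2 - (mu'_1)^2, i.e. Var(X) = E(X^2) - [E(X)]^2
print(central_moment(2), raw_moment(2) - raw_moment(1)**2)  # both approximately 0.49
```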
Exercise:

1. If X is a random variable having a probability mass function {

Find the mean and variance of the random variable X

2. If X is a random variable having a probability density


function { Find the mean and variance of the random
variable X
2.5.2. Moment Generating Functions
The expectation of e^{tX} results in a function of t that, when differentiated with respect to the argument t and then evaluated at t = 0, generates the moments of X about the origin. The function is aptly called the moment-generating function of X.
Definition: The expected value of e^{tX} is defined to be the moment-generating function (MGF) of X if the expected value exists for every value of t in some open interval containing 0, i.e. for all t in (−h, h) for some h > 0. The moment generating function of X will be denoted by Mx(t), and is represented by:

Mx(t) = E(e^{tX}) = ∑ e^{tx}·P(x) if X is discrete, or ∫_{-∞}^{∞} e^{tx}·f(x) dx if X is continuous.

Note that Mx(0) = E(e⁰) = E(1) = 1 is always defined, and from this property it is clear that a function of t cannot be an MGF unless the value of the function at t = 0 is 1. The condition that Mx(t) must be defined for all t in (−h, h) is a technical condition that ensures Mx(t) is differentiable at the point zero, a property whose importance will become evident shortly.
We now indicate how the MGF can be used to generate moments about the origin. In the following theorem, we use the notation g^(r)(a) to indicate the rth derivative of g(x) with respect to x, evaluated at x = a.
Theorem: Let X be a random variable for which the MGF Mx(t) exists. Then

μ′r = E(X^r) = Mx^(r)(0)

Example: Suppose that X is binomially distributed with parameters n and p, so that the moment generating function is Mx(t) = (p·e^t + 1 − p)^n. Find the 1st and 2nd moments, and from them the mean and variance of the binomial distribution.
Solution:
M′x(t) = n(p·e^t + 1 − p)^(n−1)·p·e^t
M″x(t) = n(n − 1)(p·e^t + 1 − p)^(n−2)·(p·e^t)² + n(p·e^t + 1 − p)^(n−1)·p·e^t
Therefore E(X) = M′x(0) = np and E(X²) = M″x(0) = n(n − 1)p² + np
Var(X) = E(X²) − [E(X)]² = n(n − 1)p² + np − (np)² = np(1 − p)
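The moment-generating steps above can be checked numerically by differentiating the binomial MGF M(t) = (p·e^t + 1 − p)^n at t = 0 with finite differences; a sketch (the choices of n, p, and step size h are arbitrary):

```python
import math

n, p = 10, 0.3
M = lambda t: (1 - p + p * math.exp(t))**n  # binomial MGF

# Central finite differences approximate M'(0) and M''(0)
h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)          # M'(0)  ~ E(X)
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # M''(0) ~ E(X^2)

mean = m1
variance = m2 - m1**2
print(mean, variance)  # approximately np = 3 and np(1-p) = 2.1
```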
Theorem: Suppose that the random variable X has MGF Mx(t). Let Y = aX + b. Then My(t), the MGF of the random variable Y, is given by My(t) = e^{bt}·Mx(at).

Example: The random variable X has the density function f(x) = e^{−x} for x > 0, and f(x) = 0 otherwise. Find the MGF, and use it to find the mean and variance of X.

Solution: Mx(t) = ∫_0^∞ e^{tx}·e^{−x} dx = ∫_0^∞ e^{−(1−t)x} dx = (−e^{−(1−t)x}/(1 − t))|_0^∞ = 1/(1 − t), provided t < 1
Then M′x(t) = 1/(1 − t)² and M″x(t) = 2/(1 − t)³, so E(X) = M′x(0) = 1, E(X²) = M″x(0) = 2, and Var(X) = E(X²) − [E(X)]² = 2 − 1 = 1.
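The closed form Mx(t) = 1/(1 − t) can be checked numerically for a particular t < 1, assuming the density f(x) = e^{−x} for x > 0 that is consistent with this MGF; a sketch (the truncation point and step count are arbitrary):

```python
import math

def integrate(f, a, b, n=200_000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

t = 0.5
# M_X(t) = integral of e^{tx} * e^{-x} over (0, inf); the integrand decays
# like e^{-(1-t)x}, so truncating at x = 60 loses only a negligible tail.
mgf_numeric = integrate(lambda x: math.exp(t * x) * math.exp(-x), 0, 60)
print(mgf_numeric, 1 / (1 - t))  # both approximately 2
```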
