Probability Distributions: Lecture #5


Lecture #5

Probability Distributions

1
SPSS Statistics
 Go to the IBM site.
 Look for the SPSS Statistics 27.0 trial version.
 Choose your environment (the OS on which you are trying to install).
 Download the trial version (valid for 14 days).
 Warning: it does not allow reinstallation on the same machine after the trial has expired.
 You might have to take a backup, reformat your system, and reinstall from the OS onwards!

2
Today’s discussion…
 Probability vs. Statistics

 Concept of random variable

 Probability distribution concept

 Discrete probability distribution

 Discrete uniform probability distribution

 Binomial distribution

 Multinomial distribution

 Hypergeometric distribution

 Poisson distribution

3
Today’s discussion
• Continuous probability distribution

 Continuous uniform probability distribution

 Normal distribution

 Standard normal distribution

 Chi-squared distribution

 Gamma distribution

 Exponential distribution

 Lognormal distribution

 Weibull distribution

4
Probability and Statistics
Probability is the chance of an outcome in an experiment (also called an event).

Event: Tossing a fair coin


Outcome: Head, Tail

Probability deals with predicting the likelihood of future events. Statistics involves the analysis of the frequency of past events.

Example: Consider there is a drawer containing 100 socks: 30 red, 20 blue and
50 black socks.
We can use probability to answer questions about the selection of a
random sample of these socks.
 PQ1. What is the probability that we draw two blue socks or two red socks from
the drawer?
 PQ2. What is the probability that we pull out three socks and have a matching pair?
 PQ3. What is the probability that we draw five socks and they are all black?

5
Statistics
Instead, if we have no knowledge about the type of socks in the drawer, then we enter the realm of statistics. Statistics helps us to infer properties about the
population on the basis of a random sample.

Questions that would be statistical in nature are:

 SQ1: A random sample of 10 socks from the drawer produced one blue, four red, and five
black socks. What is the total population of black, blue and red socks in the drawer?

 SQ2: We randomly sample 10 socks, write down the number of black socks, and
then return the socks to the drawer. The process is repeated five times. The mean
number of black socks over these trials is 7. What is the true number of black socks in
the drawer?
 etc.

6
Probability vs. Statistics
In other words:
 In probability, we are given a model and asked what kind of data we are likely to
see.
 In statistics, we are given data and asked what kind of model is likely to have
generated it.

Example 4.1: Measles Study


 A health study is concerned with the incidence of childhood measles in parents of
childbearing age in a city. For each couple, we would like to know how likely it is that
either the mother or the father or both have had childhood measles.
 The current census data indicate that 20% of adults between the ages of 17 and 35
(regardless of sex) have had childhood measles.
 This gives us the probability that an individual in the city has had childhood measles.

7
Defining Random Variable
Definition 4.1: Random Variable
A random variable is a rule that assigns a numerical value to an outcome of
interest.

Example 4.2: In the measles study, we define a random variable X as the number
of parents in a married couple who have had childhood measles.
This random variable can take the values 0, 1 and 2.
Note:
 A random variable is not exactly the same as a variable describing data.

 The probability that the random variable takes a given value can be computed
using the rules governing probability.
 For example, the probability that X = 1, meaning either the mother or the father but not both
has had measles, is 0.32. Symbolically, it is denoted as P(X = 1) = 0.32.
8
Probability Distribution
Definition 4.2: Probability distribution
A probability distribution specifies the probabilities of the values of a
random variable.

Example 4.3: Given that 0.2 is the probability that a person (aged between
17 and 35) has had childhood measles, the probability distribution is given
by

X    Probability
0    0.64
1    0.32
2    0.04

9
Probability Distribution
 In data analytics, the probability distribution is important because many
statistics used for making inferences about a population are derived from it.

 In general, a probability distribution function takes the following form:

x                  x_1      x_2      ......    x_n
f(x) = P(X = x)    f(x_1)   f(x_2)   ......    f(x_n)

Example: Measles Study

x       0      1      2
f(x)    0.64   0.32   0.04

[Figure: bar chart of f(x) with bars of height 0.64, 0.32 and 0.04 at x = 0, 1, 2]

10
Taxonomy of Probability Distributions
Discrete probability distributions
Binomial distribution
Multinomial distribution
Poisson distribution
Hypergeometric distribution

Continuous probability distributions


Normal distribution
Standard normal distribution
Gamma distribution
Exponential distribution
Chi square distribution
Lognormal distribution
Weibull distribution

11
Usage of Probability Distribution
 Distribution (discrete/continuous) function is widely used in simulation
studies.
 A simulation study uses a computer to simulate a real phenomenon or process as
closely as possible.

 The use of simulation studies can often eliminate the need for costly experiments
and is also often used to study problems where actual experimentation is
impossible.

Examples 4.4:
1) In a study testing the effectiveness of a new drug, the number of cured
patients among all the patients who use the drug approximately follows a
binomial distribution.

2) In the operation of a ticketing system in a busy public establishment (e.g., an airport), the
arrival of passengers can be simulated using a Poisson distribution.

12
Discrete Probability Distributions

13
Binomial Distribution
 In many situations, an experiment has only two possible outcomes: success and failure.
 Such an outcome is called a dichotomous outcome.
 An experiment that consists of repeated trials, each with a dichotomous outcome, is called a
Bernoulli process. Each trial in it is called a Bernoulli trial.

Example 4.5: Firing bullets to hit a target.


 Suppose, in a Bernoulli process, we define a random variable X ≡ the number of successes in
n trials.
 Such a random variable obeys the binomial probability distribution, if the experiment satisfies
the following conditions:
1) The experiment consists of n trials.
2) Each trial results in one of two mutually exclusive outcomes, one labelled a “success” and
the other a “failure”.
3) The probability of a success on a single trial is equal to 𝒑. The value of 𝑝 remains constant
throughout the experiment.
4) The trials are independent.

14
Defining Binomial Distribution
Definition 4.3: Binomial distribution
The function for computing the probability for the binomial probability
distribution is given by

f(x) = \frac{n!}{x!\,(n-x)!}\, p^x (1-p)^{n-x}, \quad x = 0, 1, 2, \ldots, n
Here, f(x) = P(X = x), where X denotes the number of successes and X = x
denotes that exactly x successes occur in the n trials.

15
Binomial Distribution
Example 4.6: Measles study
X ≡ having had childhood measles (a success)
p = 0.2, the probability that a parent had childhood measles
n = 2; here a couple is an experiment and an individual a trial, so the number
of trials is two.

Thus,
P(X = 0) = \frac{2!}{0!\,(2-0)!} (0.2)^0 (0.8)^{2-0} = 0.64

P(X = 1) = \frac{2!}{1!\,(2-1)!} (0.2)^1 (0.8)^{2-1} = 0.32

P(X = 2) = \frac{2!}{2!\,(2-2)!} (0.2)^2 (0.8)^{2-2} = 0.04
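A minimal Python sketch of this calculation, applying the binomial formula directly (the helper name binomial_pmf is ours, not from the slides):

```python
from math import comb

def binomial_pmf(x, n, p):
    # f(x) = C(n, x) * p^x * (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p) ** (n - x)

# Measles study: n = 2 parents per couple, p = 0.2
for x in range(3):
    print(x, round(binomial_pmf(x, n=2, p=0.2), 2))
# Prints: 0 0.64, 1 0.32, 2 0.04
```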

16
Binomial Distribution
Example 4.7: Verify with real-life experiment
Suppose 10 pairs of random digits are generated by a computer (Monte Carlo method):

15 38 68 39 49 54 19 79 38 14

If the value of a digit is 0 or 1, the outcome is “had childhood measles”; otherwise
(digits 2 to 9), the outcome is “did not”.
For example, the first pair (i.e., 15) represents a couple, and for this couple x = 1. The
frequency distribution for this sample is

x 0 1 2
f(x)=P(X=x) 0.7 0.3 0.0

Note: This has close similarity with binomial probability distribution!
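A hedged sketch of the same simulation at a larger scale (10,000 simulated couples instead of 10, so the empirical frequencies come closer to the binomial values 0.64, 0.32, 0.04):

```python
import random

random.seed(1)                  # for a reproducible run
n_couples = 10_000
counts = {0: 0, 1: 0, 2: 0}

for _ in range(n_couples):
    # Each parent: a digit of 0 or 1 (probability 0.2) means "had childhood measles"
    x = sum(1 for _ in range(2) if random.randint(0, 9) <= 1)
    counts[x] += 1

for x in (0, 1, 2):
    print(x, counts[x] / n_couples)   # should be close to 0.64, 0.32, 0.04
```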

17
The Multinomial Distribution
The binomial experiment becomes a multinomial experiment if each trial can have more
than two possible outcomes.

Definition 4.4: Multinomial distribution


If a given trial can result in the k outcomes 𝐸1 , 𝐸2 , … … , 𝐸𝑘 with probabilities
𝑝1 , 𝑝2 , … … , 𝑝𝑘 , then the probability distribution of the random variables
𝑋1 , 𝑋2 , … … , 𝑋𝑘 representing the number of occurrences for 𝐸1 , 𝐸2 , … … , 𝐸𝑘 in
n independent trials is

f(x_1, x_2, \ldots, x_k) = \binom{n}{x_1, x_2, \ldots, x_k}\, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}

where \binom{n}{x_1, x_2, \ldots, x_k} = \frac{n!}{x_1!\, x_2! \cdots x_k!}

with \sum_{i=1}^{k} x_i = n and \sum_{i=1}^{k} p_i = 1
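A minimal Python sketch of this formula; the sock-drawer probabilities below (draws with replacement) are our own illustration, not from the slides:

```python
from math import factorial, prod

def multinomial_pmf(xs, ps):
    # f(x1,...,xk) = n!/(x1!...xk!) * p1^x1 * ... * pk^xk, with n = sum(xs)
    n = sum(xs)
    coef = factorial(n) // prod(factorial(x) for x in xs)
    return coef * prod(p**x for p, x in zip(ps, xs))

# Sock drawer WITH replacement: p(red)=0.3, p(blue)=0.2, p(black)=0.5
# Probability that 5 draws give 1 red, 1 blue and 3 black:
print(multinomial_pmf([1, 1, 3], [0.3, 0.2, 0.5]))
```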

18
The Hypergeometric Distribution
• Collection of samples with two strategies
• With replacement
• Without replacement
• A necessary condition of the binomial distribution is that all trials are
independent of each other.
• When a sample is collected “with replacement”, each trial in the sample collection is
independent.

Example 4.8:
Probability of observing three red cards in 5 draws from an ordinary deck of 52
playing cards.
− You draw one card, note the result, and then return it to the deck.
− The deck is reshuffled well before the next draw is made.
• The hypergeometric distribution does not require independence and is based on
sampling done without replacement.

19
The Hypergeometric Distribution
 In general, the hypergeometric probability distribution enables us to find the
probability of selecting 𝑥 successes in 𝑛 trials from 𝑁 items.

Properties of Hypergeometric Distribution


• A random sample of size 𝑛 is selected without replacement from 𝑁 items.
• 𝑘 of the 𝑁 items may be classified as success and 𝑁 − 𝑘 items are classified as
failure.
Let X denote the hypergeometric random variable defining the number of successes.

Definition 4.5: Hypergeometric Probability Distribution

The probability distribution of the hypergeometric random variable 𝑋, the


number of successes in a random sample of size 𝑛 selected from 𝑁 items of
which k are labelled success and N − k are labelled failure, is given by

f(x) = P(X = x) = \frac{\binom{k}{x}\binom{N-k}{n-x}}{\binom{N}{n}}, \quad \max(0,\, n - (N - k)) \le x \le \min(n, k)
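A minimal Python sketch of this formula, applied to Example 4.8 but now without replacement (N = 52 cards, k = 26 red cards, n = 5 draws; the helper name is ours):

```python
from math import comb

def hypergeom_pmf(x, N, k, n):
    # f(x) = C(k, x) * C(N - k, n - x) / C(N, n)
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

# Probability of exactly 3 red cards in 5 draws without replacement
print(hypergeom_pmf(3, N=52, k=26, n=5))   # ≈ 0.325
```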
20
Multivariate Hypergeometric Distribution
The hypergeometric distribution can be extended to treat the case where the N
items can be divided into k classes A_1, A_2, ..., A_k, with a_1 elements in the first
class A_1, ..., and a_k elements in the k-th class. We are now interested in the
probability that a random sample of size n yields x_1 elements from A_1, x_2
elements from A_2, ..., x_k elements from A_k.

Definition 4.6: Multivariate Hypergeometric Distribution


If N items are partitioned into the k classes A_1, A_2, ..., A_k containing
a_1, a_2, ..., a_k items respectively, then the probability distribution of the
random variables X_1, X_2, ..., X_k, representing the number of elements
selected from A_1, A_2, ..., A_k in a random sample of size n, is

f(x_1, x_2, \ldots, x_k) = P(X_1 = x_1, X_2 = x_2, \ldots, X_k = x_k) = \frac{\binom{a_1}{x_1}\binom{a_2}{x_2}\cdots\binom{a_k}{x_k}}{\binom{N}{n}}

with \sum_{i=1}^{k} x_i = n and \sum_{i=1}^{k} a_i = N
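A minimal Python sketch of this formula, applied to the sock drawer from SQ1 (30 red, 20 blue, 50 black); the sample counts below are our own illustration:

```python
from math import comb, prod

def mv_hypergeom_pmf(xs, counts):
    # f(x1,...,xk) = [C(a1, x1) * ... * C(ak, xk)] / C(N, n)
    N, n = sum(counts), sum(xs)
    return prod(comb(a, x) for a, x in zip(counts, xs)) / comb(N, n)

# Sock drawer WITHOUT replacement: 30 red, 20 blue, 50 black (N = 100)
# Probability that a sample of 10 contains 4 red, 1 blue and 5 black:
print(mv_hypergeom_pmf([4, 1, 5], [30, 20, 50]))
```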

21
The Poisson Distribution
Some experiments involve counting the number of outcomes occurring during a
given time interval (or in a region of space).
Such a process is called a Poisson process.

Example 4.9:
Number of clients visiting a ticket selling counter in a metro station.

22
The Poisson Distribution
Properties of Poisson process
 The number of outcomes in one time interval is independent of the number that occurs
in any other disjoint interval [Poisson process has no memory]
 The probability that a single outcome will occur during a very short interval is
proportional to the length of the time interval and does not depend on the number of
outcomes occurring outside this time interval.
 The probability that more than one outcome will occur in such a short time interval is
negligible.

Definition 4.7: Poisson distribution


The probability distribution of the Poisson random variable 𝑋, representing
the number of outcomes occurring in a given time interval t, is

f(x; \lambda t) = P(X = x) = \frac{e^{-\lambda t} (\lambda t)^x}{x!}, \quad x = 0, 1, 2, \ldots

where 𝜆 is the average number of outcomes per unit time and 𝑒 = 2.71828 …
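A minimal Python sketch of the Poisson formula; the ticket-counter rate λ = 4 clients per minute and interval t = 2 minutes are assumed numbers, not from the slides:

```python
from math import exp, factorial

def poisson_pmf(x, lam_t):
    # f(x; λt) = e^(-λt) * (λt)^x / x!
    return exp(-lam_t) * lam_t**x / factorial(x)

lam_t = 4 * 2   # λ = 4 arrivals per minute, t = 2 minutes, so λt = 8
print(poisson_pmf(5, lam_t))                            # P(exactly 5 arrivals)
print(sum(poisson_pmf(x, lam_t) for x in range(11)))    # P(at most 10 arrivals)
```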

23
Descriptive measures
Given a random variable X in an experiment, we have denoted f(x) = P(X = x), the
probability that X = x. For a discrete random variable, f(x) = 0 for all values of x except x =
0, 1, 2, .....

Properties of discrete probability distribution


1. 0 \le f(x) \le 1
2. \sum f(x) = 1
3. \mu = \sum x \cdot f(x)   [the mean]
4. \sigma^2 = \sum (x - \mu)^2 \cdot f(x)   [the variance]

In 2, 3 and 4, the summation extends over all possible discrete values of x.


Note: For the discrete uniform distribution, f(x) = \frac{1}{n} with x = 1, 2, \ldots, n,

\mu = \frac{1}{n} \sum_{i=1}^{n} x_i

and \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2
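A minimal sketch applying properties 3 and 4 to the measles distribution from Example 4.3:

```python
# Mean and variance of a discrete distribution, applied to the measles pmf
pmf = {0: 0.64, 1: 0.32, 2: 0.04}

mu = sum(x * f for x, f in pmf.items())                # μ = Σ x·f(x)
var = sum((x - mu) ** 2 * f for x, f in pmf.items())   # σ² = Σ (x-μ)²·f(x)

print(mu, var)   # 0.4 and 0.32, matching n·p = 0.4 and n·p·(1-p) = 0.32
```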

24
Descriptive measures
1. Binomial distribution
The binomial probability distribution is characterized by p (the probability of
success) and n (the number of trials). Then

\mu = np

\sigma^2 = np(1 - p)

2. Hypergeometric distribution
The hypergeometric distribution is characterized by the sample size (n), the
number of items (N) and the number k of items labelled success. Then

\mu = \frac{nk}{N}

\sigma^2 = \frac{N - n}{N - 1} \cdot n \cdot \frac{k}{N}\left(1 - \frac{k}{N}\right)
25
Descriptive measures
3. Poisson Distribution
The Poisson distribution is characterized by λt, where λ = the mean number of
outcomes per unit time and t = the time interval.

\mu = \lambda t
\sigma^2 = \lambda t

26
Continuous Probability Distributions

27
Continuous Probability Distributions

[Figures: a discrete probability distribution, with probability mass f(x) at the points x1, x2, x3, x4, and a continuous probability distribution, with a smooth density curve f(x) over X]

28
Continuous Probability Distributions

 When the random variable of interest can take any value in an interval, it is
called a continuous random variable.
 Every continuous random variable has an infinite, uncountable number of possible
values (i.e., any value in an interval).

 Consequently, a continuous random variable differs from a discrete random
variable.

29
Properties of Probability Density Function
The function 𝑓(𝑥) is a probability density function for the continuous random
variable 𝑋, defined over the set of real numbers 𝑅, if

1. f(x) \ge 0, for all x \in R

2. \int_{-\infty}^{\infty} f(x)\,dx = 1

3. P(a \le X \le b) = \int_{a}^{b} f(x)\,dx

4. \mu = \int_{-\infty}^{\infty} x\, f(x)\,dx

5. \sigma^2 = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx

[Figure: density curve f(x) with the area between a and b shaded, representing P(a ≤ X ≤ b)]
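A minimal numerical check of properties 2 to 5 for a toy density f(x) = 2x on [0, 1] (our own example), using a midpoint Riemann sum in place of the integrals:

```python
def f(x):
    # A valid density: f(x) = 2x on [0, 1], 0 elsewhere
    return 2 * x if 0 <= x <= 1 else 0.0

def integrate(g, a, b, steps=100_000):
    # Midpoint Riemann sum approximation of the integral of g over [a, b]
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

area = integrate(f, 0, 1)                               # property 2: ≈ 1
prob = integrate(f, 0.5, 1)                             # property 3: P(0.5 ≤ X ≤ 1) ≈ 0.75
mu   = integrate(lambda x: x * f(x), 0, 1)              # property 4: ≈ 2/3
var  = integrate(lambda x: (x - mu) ** 2 * f(x), 0, 1)  # property 5: ≈ 1/18
print(area, prob, mu, var)
```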

30
Continuous Uniform Distribution
 One of the simplest continuous distributions in all of statistics is the continuous
uniform distribution.

Definition 4.8: Continuous Uniform Distribution

The density function of the continuous uniform random variable 𝑋 on the


interval [A, B] is:

f(x; A, B) = \begin{cases} \frac{1}{B - A} & A \le x \le B \\ 0 & \text{otherwise} \end{cases}

31
Continuous Uniform Distribution

[Figure: uniform density of constant height 1/(B - A) over the interval [A, B], with a sub-interval (c, d) marked]

Note:
a) \int_{-\infty}^{\infty} f(x)\,dx = \frac{1}{B - A} \times (B - A) = 1
b) P(c < x < d) = \frac{d - c}{B - A}, where both c and d are in the interval (A, B)
c) \mu = \frac{A + B}{2}
d) \sigma^2 = \frac{(B - A)^2}{12}
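A minimal sampling check of notes (c) and (d); the interval endpoints A = 2 and B = 10 are assumed values:

```python
import random

random.seed(0)
A, B = 2.0, 10.0
samples = [random.uniform(A, B) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

print(mean, (A + B) / 2)           # sample mean vs. μ = (A+B)/2 = 6
print(var, (B - A) ** 2 / 12)      # sample variance vs. σ² = (B-A)²/12 ≈ 5.33
```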

32
Normal Distribution
 The most often used continuous probability distribution is the normal
distribution; it is also known as the Gaussian distribution.

 Its graph, called the normal curve, is a bell-shaped curve.

 Such a curve approximately describes many phenomena that occur in nature,
industry and research.
 Physical measurements in areas such as meteorological experiments, rainfall
studies and measurement of manufactured parts are often more than adequately
explained by a normal distribution.

 A continuous random variable X having the bell-shaped distribution is called a


normal random variable.

33
Normal Distribution
• The mathematical equation for the probability distribution of the normal variable
depends upon the two parameters 𝜇 and 𝜎, its mean and standard deviation.

[Figure: normal density curve f(x) centred at µ with standard deviation σ]

Definition 4.9: Normal distribution


The density of the normal variable x, with mean μ and variance σ², is

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad -\infty < x < \infty

where π = 3.14159... and e = 2.71828..., the Naperian constant.


34
Normal Distribution
[Figures: normal curves with µ1 < µ2 and σ1 = σ2; normal curves with µ1 = µ2 and σ1 < σ2; normal curves with µ1 < µ2 and σ1 < σ2]

35
Properties of Normal Distribution
 The curve is symmetric about a vertical axis through the mean 𝜇.
 The random variable 𝑥 can take any value from −∞ 𝑡𝑜 ∞.
 The most frequently used descriptive parameters, μ and σ, define the curve itself.
 The mode, which is the point on the horizontal axis where the curve is a
maximum occurs at 𝑥 = 𝜇.
 The total area under the curve and above the horizontal axis is equal to 1.

\int_{-\infty}^{\infty} f(x)\,dx = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = 1

 \mu = \int_{-\infty}^{\infty} x\, f(x)\,dx = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx

 \sigma^2 = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} (x - \mu)^2\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx

 P(x_1 < x < x_2) = \frac{1}{\sigma\sqrt{2\pi}} \int_{x_1}^{x_2} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx

denotes the probability of x in the interval (x_1, x_2).

[Figure: normal curve with the area between x1 and x2 shaded]

36
Standard Normal Distribution
 The normal distribution is computationally awkward: calculating P(x1 < x < x2)
requires integrating the density for each pair (x1, x2) and each given μ and σ.
 To avoid this difficulty, the concept of the z-transformation is used.

z = \frac{x - \mu}{\sigma}   [z-transformation]

 X: normal distribution with mean μ and variance σ².

 Z: standard normal distribution with mean μ = 0 and variance σ² = 1.
 Therefore, any probability under f(x; μ, σ) equals the corresponding probability under
the standard normal density:

P(x_1 < X < x_2) = \frac{1}{\sigma\sqrt{2\pi}} \int_{x_1}^{x_2} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \frac{1}{\sqrt{2\pi}} \int_{z_1}^{z_2} e^{-\frac{z^2}{2}}\,dz = P(z_1 < Z < z_2)

where z_1 = (x_1 - \mu)/\sigma and z_2 = (x_2 - \mu)/\sigma.
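A minimal Python sketch of this z-transformation route, using the error function to evaluate the standard normal integral; the values μ = 50, σ = 10 and the interval (40, 60) are assumed:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # Φ((x - μ)/σ) expressed through the error function
    z = (x - mu) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_prob(x1, x2, mu, sigma):
    # P(x1 < X < x2) after the z-transformation
    return normal_cdf(x2, mu, sigma) - normal_cdf(x1, mu, sigma)

print(normal_prob(40, 60, mu=50, sigma=10))   # ≈ 0.6827 (within one σ of the mean)
```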

37
Standard Normal Distribution
Definition 4.10: Standard normal distribution
The distribution of a normal random variable with mean 0 and variance 1 is called
a standard normal distribution.

[Figures: a normal density f(x: µ, σ) centred at x = µ, and the corresponding standard normal density f(z: 0, 1) with µ = 0 and σ = 1]

38
Gamma Distribution
The gamma distribution derives its name from the well known gamma function in
mathematics.

Definition 4.11: Gamma Function


𝛼
Γ 𝛼 = ‫׬‬0 𝑥 𝛼−1 𝑒 −𝑥 𝑑𝑥 for 𝛼 > 0

Integrating by parts, we can write,


𝛼

Γ 𝛼 = 𝛼 − 1 න 𝑥 𝛼−2 𝑒 −𝑥 𝑑𝑥
0
= 𝛼−1 Γ 𝛼−1

Thus Γ function is defined as a recursive function.

39
Gamma Distribution
When α = n, a positive integer, we can write

\Gamma(n) = (n - 1)(n - 2)\cdots\Gamma(1) = (n - 1)(n - 2)\cdots 3 \cdot 2 \cdot 1 = (n - 1)!

Further, \Gamma(1) = \int_{0}^{\infty} e^{-x}\,dx = 1

Note:
\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}   [an important property]
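A minimal check of these properties using Python's math.gamma:

```python
import math

# Recursion and special values of the gamma function
print(math.gamma(5), math.factorial(4))         # Γ(5) = 4! = 24
print(math.gamma(4.5), 3.5 * math.gamma(3.5))   # Γ(α) = (α - 1)·Γ(α - 1)
print(math.gamma(0.5), math.sqrt(math.pi))      # Γ(1/2) = √π
```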

40
Gamma Distribution
Definition 4.12: Gamma Distribution

The continuous random variable 𝑥 has a gamma distribution with parameters 𝛼


and β such that:

f(x; \alpha, \beta) = \begin{cases} \frac{1}{\beta^{\alpha}\,\Gamma(\alpha)}\, x^{\alpha - 1} e^{-x/\beta} & x > 0 \\ 0 & \text{otherwise} \end{cases}

where α > 0 and β > 0

[Figure: gamma density curves f(x) for β = 1 and α = 1, 2, 4]

41
Exponential Distribution
Definition 4.13: Exponential Distribution
The continuous random variable x has an exponential distribution with parameter
β, where:

f(x; \beta) = \begin{cases} \frac{1}{\beta}\, e^{-x/\beta} & x > 0 \\ 0 & \text{otherwise} \end{cases}, \quad \beta > 0

Note:
1) The mean and variance of the gamma distribution are
   \mu = \alpha\beta, \quad \sigma^2 = \alpha\beta^2
2) The mean and variance of the exponential distribution are
   \mu = \beta, \quad \sigma^2 = \beta^2
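A minimal sampling check of note 2; the mean waiting time β = 2 is an assumed value (random.expovariate takes the rate 1/β):

```python
import random

random.seed(0)
beta = 2.0                        # assumed mean waiting time
# random.expovariate takes the rate 1/β, so the samples have mean β
samples = [random.expovariate(1 / beta) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)                  # close to μ = β = 2 and σ² = β² = 4
```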

42
Chi-Squared Distribution
Definition 4.14: Chi-squared distribution
The continuous random variable x has a chi-squared distribution, with v degrees
of freedom, whose density is given by

f(x; v) = \begin{cases} \frac{1}{2^{v/2}\,\Gamma(v/2)}\, x^{v/2 - 1} e^{-x/2} & x > 0 \\ 0 & \text{otherwise} \end{cases}

where v is a positive integer.

• The Chi-squared distribution plays an important role in statistical inference.


• The mean and variance of the chi-squared distribution are:

\mu = v \quad \text{and} \quad \sigma^2 = 2v

43
Lognormal Distribution
The lognormal distribution applies in cases where a natural log transformation
results in a normal distribution.

Definition 4.15: Lognormal distribution

The continuous random variable 𝑥 has a lognormal distribution if the random


variable y = ln(x) has a normal distribution with mean μ and standard deviation
σ. The resulting density function of x is:

f(x; \mu, \sigma) = \begin{cases} \frac{1}{\sigma x \sqrt{2\pi}}\, e^{-\frac{1}{2\sigma^2}[\ln(x) - \mu]^2} & x \ge 0 \\ 0 & x < 0 \end{cases}

44
Lognormal Distribution

[Figure: lognormal density curves f(x) for (µ = 0, σ = 1) and (µ = 1, σ = 1)]

45
Weibull Distribution
Definition 4.16: Weibull Distribution

The continuous random variable 𝑥 has a Weibull distribution with parameter 𝛼


and β such that:

f(x; \alpha, \beta) = \begin{cases} \alpha\beta\, x^{\beta - 1} e^{-\alpha x^{\beta}} & x > 0 \\ 0 & \text{otherwise} \end{cases}

where α > 0 and β > 0

The mean and variance of the Weibull distribution are:

\mu = \alpha^{-1/\beta}\, \Gamma\!\left(1 + \frac{1}{\beta}\right)

\sigma^2 = \alpha^{-2/\beta} \left\{ \Gamma\!\left(1 + \frac{2}{\beta}\right) - \left[\Gamma\!\left(1 + \frac{1}{\beta}\right)\right]^2 \right\}

[Figure: Weibull density curves f(x) for β = 1, 2, 3.5]
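A minimal check of these two formulas against direct numerical integration of the Weibull density; the parameter values α = 1, β = 2 are assumed:

```python
import math

def weibull_moments(alpha, beta):
    # μ and σ² from the closed-form expressions above
    mu = alpha ** (-1 / beta) * math.gamma(1 + 1 / beta)
    var = alpha ** (-2 / beta) * (math.gamma(1 + 2 / beta)
                                  - math.gamma(1 + 1 / beta) ** 2)
    return mu, var

def numeric_moments(alpha, beta, upper=20.0, steps=200_000):
    # Midpoint-rule integration of x·f(x) and (x-μ)²·f(x), with
    # f(x) = α·β·x^(β-1)·exp(-α·x^β); the infinite upper limit is
    # truncated at `upper`, which is ample for these parameter values
    f = lambda x: alpha * beta * x ** (beta - 1) * math.exp(-alpha * x ** beta)
    h = upper / steps
    xs = [h * (i + 0.5) for i in range(steps)]
    mu = sum(x * f(x) * h for x in xs)
    var = sum((x - mu) ** 2 * f(x) * h for x in xs)
    return mu, var

print(weibull_moments(1.0, 2.0))   # ≈ (0.8862, 0.2146)
print(numeric_moments(1.0, 2.0))   # should agree closely
```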

46
Reference

 The detailed material related to this lecture can be found in

Probability and Statistics for Engineers and Scientists (8th Ed.)


by Ronald E. Walpole, Sharon L. Myers, Keying Ye (Pearson), 2013.

47
Questions of the day…
1. Give some examples of random variables. Also, state the
range of values each can take and whether it is continuous or
discrete.

2. In the following cases, which probability distributions are
likely to be followed? In each case, you should mention the
random variable and the parameter(s) influencing the
probability distribution function.
a) In a retail store, how many counters should be open in a given
time period?
b) Number of people who are suffering from cancer in a town.

48
Questions of the day…
2. In the following cases, which probability distributions are
likely to be followed? In each case, you should mention the
random variable and the parameter(s) influencing the
probability distribution function.
c) A missile will hit the enemy's aircraft.
d) A student in the class will secure an EX grade.
e) Salary of a person in an enterprise.
f) Accidents made by cars in a city.
g) People who quit education after (i) primary, (ii) secondary, and (iii) higher
secondary education.

49
Questions of the day…
3. How can you calculate the mean and standard deviation of
a population if the population follows each of the following
probability distributions with respect to an event?
a) Binomial distribution function.
b) Poisson distribution function.
c) Hypergeometric distribution function.
d) Normal distribution function.
e) Standard normal distribution function.

50
