3.0 Common Probability Distribution

RAW DATA TO PROBABILITY

Data Collection → Histogram & Central Moment → Hypothesise Distribution → Parameter Estimation → Goodness-of-Fit Test → Probability Density Function → Probability, P(x)

(Data collection and histogram/central moment are covered in Topics 1 & 2; the remaining steps are covered in Topic 3.)

INTRODUCTION
• In evaluating structural reliability, several types of standardized probability distributions are used to model the design parameters or random variables.

• Selection of the distribution function is an essential part of obtaining the probabilistic characteristics of structural systems.

• The selection of a particular type of distribution depends on several factors.

• The selection or determination of distribution functions of random variables is known as statistical tolerancing.

Selection of Distribution Criteria
• The nature of the problem
• The underlying assumptions associated with the distribution
• The shape of the curves f_X(x) and F_X(x) obtained after fitting the data
• The convenience and simplicity afforded by the distribution in subsequent computations
NORMAL/GAUSSIAN DISTRIBUTION
• The Normal distribution, also known as the Gaussian distribution, is the most widely used distribution in engineering practice due to its simplicity and convenience, and especially because of its theoretical basis in the central limit theorem.

• The central limit theorem states that the sum of many arbitrarily distributed random variables asymptotically follows a normal distribution as the sample size becomes large.

• The distribution is often used for cases with small coefficients of variation, such as Young's modulus, Poisson's ratio, and other material properties.

• The parameters that define the Normal distribution are the mean value, μ_x, and the standard deviation, σ_x, denoted as N(μ, σ).
• One useful property of the Normal distribution is that it applies to any value of a random variable from −∞ to +∞.

• The distribution is symmetric (bell curve), so the mean, mode and median values are the same. The mean value can be estimated directly from the observed data.

• The areas under the curve within one, two and three standard deviations of the mean are about 68%, 95.5% and 99.7%, respectively.

• Any linear function of normally distributed random variables is also normally distributed. A nonlinear function of normally distributed random variables may or may not be normal.
• The PDF for the Normal distribution can be expressed as

  f_X(x) = [1/(σ_x √(2π))] · exp[ −(1/2)·((x − μ_x)/σ_x)² ]

• The CDF for the Normal distribution is given by

  F(x) = ∫_{−∞}^{z} [1/√(2π)] · exp(−z²/2) dz

• where

  z = (x − μ_x)/σ_x

• or, simplified in the form of the standard Normal probability N(0,1),

  F(x) = Φ[ (x − μ_x)/σ_x ]

• Φ = standard Normal probability.
TABLE OF STANDARD NORMAL DISTRIBUTION
EXAMPLE 1
The length of human pregnancies from conception to birth approximates a normal
distribution with a mean of 266 days and a standard deviation of 16 days. What
proportion of all pregnancies will last between 240 and 270 days (roughly between
8 and 9 months)?
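A quick numerical check of Example 1, sketched with Python's standard-library `statistics.NormalDist` (variable names are my own):

```python
from statistics import NormalDist

# Pregnancy length: X ~ N(mean = 266 days, std dev = 16 days)
X = NormalDist(mu=266, sigma=16)

# P(240 < X < 270) = Phi((270-266)/16) - Phi((240-266)/16)
p = X.cdf(270) - X.cdf(240)
print(round(p, 4))  # about 0.5466, i.e. roughly 55% of pregnancies
```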
EXAMPLE 2
What length of time marks the shortest 70% of all pregnancies?
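Example 2 asks for the 70th percentile of the same distribution; a sketch using the inverse CDF:

```python
from statistics import NormalDist

# Same distribution as Example 1
X = NormalDist(mu=266, sigma=16)

# Shortest 70% of pregnancies: find x with P(X <= x) = 0.70
x70 = X.inv_cdf(0.70)
print(round(x70, 1))  # about 274.4 days
```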
EXAMPLE 3
The average number of acres burned by forest and range fires in Simpanan Gunung
Inasa Forest, Kulim district is 4,300 acres per year, with a standard deviation of 750
acres. The distribution of the number of acres burned is normal. What is the
probability that between 2,500 and 4,200 acres will be burned in any given year?
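Example 3 is another interval probability under a normal model; a minimal check:

```python
from statistics import NormalDist

# Acres burned per year: X ~ N(4300, 750)
X = NormalDist(mu=4300, sigma=750)

# P(2500 < X < 4200)
p = X.cdf(4200) - X.cdf(2500)
print(round(p, 4))  # about 0.4388
```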
EXAMPLE 4
A city installs 2000 electric lamps for street lighting. These lamps have a mean
burning life of 1000 hours with a standard deviation of 200 hours. The normal
distribution is a close approximation to this case.

a) What is the probability that a lamp will fail in the first 700 burning hours?
b) What is the probability that a lamp will fail between 900 and 1300 burning
hours?
c) How many lamps are expected to fail between 900 and 1300 burning hours?
d) What is the probability that a lamp will burn for exactly 900 hours?
e) What is the probability that a lamp will burn between 899 hours and 901 hours
before it fails?
f) After how many burning hours would we expect 10% of the lamps to be left?
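All six parts of Example 4 can be sketched in a few lines (the slide solutions use z-table lookups, so hand answers are slightly rounded):

```python
from statistics import NormalDist

X = NormalDist(mu=1000, sigma=200)   # lamp burning life in hours
n_lamps = 2000

p_a = X.cdf(700)                      # a) fail within the first 700 h
p_b = X.cdf(1300) - X.cdf(900)        # b) fail between 900 and 1300 h
n_c = n_lamps * p_b                   # c) expected number in that range
p_d = 0.0                             # d) exactly 900 h: zero for a continuous variable
p_e = X.cdf(901) - X.cdf(899)         # e) small but non-zero interval probability
t_f = X.inv_cdf(0.90)                 # f) 10% of lamps still burning after this time

print(round(p_a, 4))   # about 0.0668
print(round(p_b, 4))   # about 0.6247
print(round(n_c))      # about 1249 lamps
print(round(p_e, 4))   # about 0.0035
print(round(t_f, 1))   # about 1256.3 hours
```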
EXAMPLE 4 (solution notes)
• Parts (a)–(d) are solved by standardising with z = (x − 1000)/200 and reading the standard Normal table.
• Part (d): for a continuous distribution, the probability of burning for exactly 900 hours (a single point) is zero.
• Part (e): since 899–901 hours is an interval rather than a single exact value, the probability of failure in this interval is not infinitesimal (although in this instance the probability is small). We could apply linear interpolation between the values given in the Standard Normal Distribution Table; however, considering that in practice the parameters are not known exactly and the real distribution may not be exactly normal, the extra precision is not worthwhile.
• Part (f): once again, we could apply linear interpolation, but the accuracy of the calculation probably does not justify it. Since (0.90 − 0.8997) << (0.9015 − 0.90), we take z₁ = 1.28.
TUTORIAL 1
A cantilever beam supports two random loads with means and standard deviations μ₁ = 20 kN, σ₁ = 4 kN and μ₂ = 10 kN, σ₂ = 2 kN. The bending moment (M) and the shear force (V) at the fixed end due to the two loads are M = L₁F₁ + L₂F₂ and V = F₁ + F₂, respectively.

(a) If the two loads are independent, what are the mean and the standard deviation of the shear force and the bending moment at the fixed end?
(b) If the two random loads are normally distributed, what is the probability that the bending moment will exceed 235 kNm?
(c) If the two loads are independent, what is the correlation coefficient between V and M?
[Figure: cantilever beam carrying loads F₁ and F₂ at lever arms L₁ = 6 m and L₂ = 9 m from the fixed end.]
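A sketch of one way to work Tutorial 1 with the standard library. The covariance formula Cov(V, M) = L₁σ₁² + L₂σ₂² for independent loads is derived here from the linear forms of V and M; it is not stated on the slide:

```python
from statistics import NormalDist
import math

# Load statistics (kN) and lever arms (m)
mu1, s1 = 20.0, 4.0
mu2, s2 = 10.0, 2.0
L1, L2 = 6.0, 9.0

# (a) V = F1 + F2 and M = L1*F1 + L2*F2 are linear combinations,
# so for independent loads the means and variances add directly.
mu_V = mu1 + mu2                                # 30 kN
sd_V = math.sqrt(s1**2 + s2**2)                 # sqrt(20) ~ 4.47 kN
mu_M = L1 * mu1 + L2 * mu2                      # 210 kNm
sd_M = math.sqrt((L1 * s1)**2 + (L2 * s2)**2)   # 30 kNm

# (b) A linear function of normal variables is normal (see the slides above)
p_exceed = 1.0 - NormalDist(mu_M, sd_M).cdf(235)

# (c) Cov(V, M) = L1*Var(F1) + L2*Var(F2) for independent F1, F2
cov_VM = L1 * s1**2 + L2 * s2**2                # 132
rho = cov_VM / (sd_V * sd_M)

print(mu_V, round(sd_V, 3), mu_M, sd_M)
print(round(p_exceed, 4))  # about 0.2023
print(round(rho, 3))       # about 0.984
```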
LOGNORMAL DISTRIBUTION
• Some engineering problems depend only on positive values of variables.

• The Lognormal distribution is the most suitable distribution where negative values of a random variable must be avoided; only positive values can be fitted under the Lognormal distribution.

• Unlike the Normal distribution, the Lognormal distribution is unsymmetrical. Its domain lies between 0 and +∞.

• The mean, median, and mode values are not the same.

• In the Lognormal distribution, the natural logarithm of the random variable has a Normal distribution.
• The PDF and CDF of this distribution are

  f_X(x) = [1/(x σ_x √(2π))] · exp[ −(1/2)·((ln x − μ_x)/σ_x)² ]

  F(x) = ∫₀^x [1/(x σ_x √(2π))] · exp[ −(1/2)·((ln x − μ_x)/σ_x)² ] dx

  F(x) = Φ[ (ln x − μ_x)/σ_x ]

where μ_x and σ_x are the mean and standard deviation of ln x.
EXAMPLE 5
Suppose that the reaction time in seconds of a person can be modeled by a lognormal distribution with parameter values μ = −0.35 and σ = 0.2, i.e. ln X ~ N(−0.35, 0.2).

a) Find the probability that the reaction time is less than 0.6 seconds.
b) Find the reaction time that is exceeded by 95% of the population.
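Both parts of Example 5 can be sketched using the relation P(X < x) = Φ[(ln x − μ)/σ] given above:

```python
from statistics import NormalDist
import math

# ln X ~ N(-0.35, 0.2)
mu, sigma = -0.35, 0.2
Z = NormalDist()  # standard normal

# a) P(reaction time < 0.6 s) = Phi((ln 0.6 - mu)/sigma)
p_a = Z.cdf((math.log(0.6) - mu) / sigma)

# b) Time exceeded by 95% of the population: P(X > x) = 0.95,
#    i.e. the 5th percentile of X
x_b = math.exp(mu + sigma * Z.inv_cdf(0.05))

print(round(p_a, 4))  # about 0.211
print(round(x_b, 3))  # about 0.507 s
```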
EXPONENTIAL DISTRIBUTION
• This distribution is defined for a positive random variable x > x₀ ≥ 0.

• The Exponential distribution is often used in reliability engineering as it can represent a system having a constant failure rate.

• The PDF depends on the constant parameter λ and is given by

  f(x) = 0                        for x < x₀
  f(x) = λ · exp[−λ(x − x₀)]      for x ≥ x₀

• λ = Exponential parameter, also known as the failure rate.
• x₀ = an offset, which is assumed known a priori (the smallest value).
• The CDF for the Exponential distribution is defined as

  F(x) = 1 − exp[−λ(x − x₀)]

  (x₀ = 0 when the smallest data value is zero; assume x₀ = 0 if it is not mentioned in the question.)

• The mean and variance are given respectively by

  μ = 1/λ + x₀

  σ² = 1/λ²
CONSTANT FAILURE RATE

• Failure rate is the frequency with which an engineered system or component fails, expressed in failures per unit of time. It is usually denoted by the Greek letter λ (lambda) and is often used in reliability engineering.

• Because a constant failure rate is easy to work with, the exponential distribution has proven popular as the traditional basis for reliability modeling.
EXAMPLE 6
On the average, a certain computer part lasts ten years. The length of time the
computer part lasts is exponentially distributed.

a) What is the probability that a computer part lasts more than 7 years?

b) On the average, how long would five computer parts last if they are used
one after another?

c) Eighty percent of computer parts last at most how long?

d) What is the probability that a computer part lasts between nine and 11
years?
a) What is the probability that a computer part lasts more than 7 years?

μ = 10, so λ = 1/μ = 1/10 = 0.10

P(X > 7) = 1 − P(X < 7). Since P(X < x) = 1 − e^(−λx), then P(X > x) = 1 − (1 − e^(−λx)) = e^(−λx).

P(X > 7) = e^((−0.1)(7)) = 0.4966

The probability that a computer part lasts more than seven years is 0.4966.
b) On the average, how long would five computer parts last if they are used one after another?

On the average, one computer part lasts ten years. Therefore five computer parts, used one right after the other, would last on the average (5)(10) = 50 years.
c) Eighty percent of computer parts last at most how long?

Find the 80th percentile, k, i.e. solve P(X ≤ k) = 1 − e^(−0.1k) = 0.80:

k = ln(1 − 0.80)/(−0.1) = 16.1

Eighty percent of the computer parts last at most 16.1 years.
d) What is the probability that a computer part lasts between 9 and 11 years?

P(9 < X < 11) = P(X < 11) − P(X < 9)
             = (1 − e^((−0.1)(11))) − (1 − e^((−0.1)(9)))
             = 0.6671 − 0.5934
             = 0.0737

The probability that a computer part lasts between 9 and 11 years is 0.0737.
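The worked answers above can be verified with a short script:

```python
import math

lam = 1 / 10  # failure rate: mean life is 10 years

p_a = math.exp(-lam * 7)                         # a) P(X > 7)
mean_b = 5 * 10                                  # b) five parts in sequence
k_c = math.log(1 - 0.80) / (-lam)                # c) 80th percentile
p_d = math.exp(-lam * 9) - math.exp(-lam * 11)   # d) P(9 < X < 11)

print(round(p_a, 4))  # 0.4966
print(mean_b)         # 50
print(round(k_c, 1))  # 16.1
print(round(p_d, 4))  # 0.0737
```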
EXAMPLE 7
If a certain component is known to have a failure rate of 0.2 failures per one million hours (λ = 0.2 × 10⁻⁶ per hour), the reliability of this device over a period of, say, 100 000 hours will be:

  R(t) = e^(−λt) = exp(−0.2 × 10⁻⁶ × 100 000) = e^(−0.02) ≈ 0.98

This means that the component has a chance of survival over 100 000 hours of about 98%. Suppose that the producer of this component has distributed 5000 units to different users; then Ns = Nt·R(t) = 5000 × 0.98 = 4901. Accordingly, about 99 components are likely to fail during the 100 000 hours.
WEIBULL DISTRIBUTION
• The Weibull distribution has been widely used to solve many engineering problems.

• It is widely accepted as the most useful density function for reliability estimation, and it is also applicable to problems such as defining the strength of brittle materials, classifying failure types, and scheduling preventive maintenance and inspection activities.

• Moreover, it has become the most important distribution for life-testing models, such as time to failure of electrical components.

• The Weibull distribution is very flexible in matching a wide range of phenomena.

• Unlike the Exponential distribution, the failure rate represented by the Weibull distribution is not constant.
• The PDF and CDF for the Weibull distribution are defined as:

  f_X(x) = (β/θ) · ((x − δ)/θ)^(β−1) · exp[ −((x − δ)/θ)^β ]

  F(x) = 1 − exp[ −((x − δ)/θ)^β ]

  (δ = 0 when the smallest data value is zero; assume δ = 0 if it is not mentioned in the question.)

β = shape parameter (0 < β < ∞)
θ = scale parameter (0 < θ < ∞)
δ = location parameter (−∞ < δ < ∞)
• The flexibility of the Weibull distribution in representing any random variable means that it can be identical or similar to several other common distributions:

  β = 1: identical to the Exponential distribution.
  β = 2: identical to the Rayleigh distribution.
  β = 2.5: similar to the Lognormal distribution.
  β = 3.6: similar to the Normal distribution.
  β = 5: similar to the peaked Normal distribution.
• The mean and variance for the Weibull distribution can be calculated from all three parameters (β, θ and δ) using the following equations:

  μ = δ + θ · Γ(1 + 1/β)

  σ² = θ² · [ Γ(1 + 2/β) − Γ²(1 + 1/β) ]

where Γ(·) is the Gamma function.
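The mean and variance equations above translate directly into code; `math.gamma` supplies the Gamma function. The β = 1 check exploits the reduction to the Exponential distribution noted earlier:

```python
import math

def weibull_mean_var(beta, theta, delta=0.0):
    """Mean and variance of a three-parameter Weibull distribution
    (shape beta, scale theta, location delta), per the slide formulas."""
    g1 = math.gamma(1 + 1 / beta)
    g2 = math.gamma(1 + 2 / beta)
    mean = delta + theta * g1
    var = theta ** 2 * (g2 - g1 ** 2)
    return mean, var

# With beta = 1 the Weibull reduces to an Exponential with lambda = 1/theta,
# whose mean is theta and variance theta**2 (see the Exponential slides).
m, v = weibull_mean_var(beta=1.0, theta=5.0)
print(m, v)  # 5.0 25.0
```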
WEIBULL FAILURE RATE
• Weibull distributions with β < 1 have a failure rate that decreases with time, also known as infantile or early-life failures.

• Weibull distributions with β close to or equal to 1 have a fairly constant failure rate, indicative of useful life or random failures.

• Weibull distributions with β > 1 have a failure rate that increases with time, also known as wear-out failures.

• These comprise the three sections of the classic "bathtub curve." A mixed Weibull distribution with one subpopulation with β < 1, one subpopulation with β = 1 and one subpopulation with β > 1 would have a failure rate plot identical to the bathtub curve.
• In reliability engineering, the cumulative distribution function corresponding to a bathtub curve may be analysed using a Weibull chart.
EXAMPLE 8
(The worked example appears graphically on the original slide. Note that its "shape parameter" corresponds to β, and its "inverse scale parameter" to 1/θ, in the previous equations.)
EXAMPLE 9
The lifetime (in hundreds of hours) of a certain type of vacuum tube has a Weibull distribution with parameters α = 2 (shape) and β = 3 (scale). Compute the following:

a. E(X) and V(X)
b. P(X < 5)
c. P(1.8 < X < 6)
d. P(X > 3)

(In the notation of the previous equations, the shape parameter α corresponds to β and the scale parameter to θ.)
(The worked solutions use the Weibull mean and variance equations for part a, and the Weibull CDF for parts b–d.)
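A sketch of Example 9, with shape 2 and scale 3 mapped onto the earlier β/θ notation and δ = 0:

```python
import math

shape, scale = 2.0, 3.0   # lifetime in hundreds of hours

def F(x):
    """Weibull CDF with delta = 0."""
    return 1 - math.exp(-((x / scale) ** shape))

# a) Mean and variance from the Gamma-function formulas
E_X = scale * math.gamma(1 + 1 / shape)
V_X = scale ** 2 * (math.gamma(1 + 2 / shape) - math.gamma(1 + 1 / shape) ** 2)

p_b = F(5)              # b) P(X < 5)
p_c = F(6) - F(1.8)     # c) P(1.8 < X < 6)
p_d = 1 - F(3)          # d) P(X > 3)

print(round(E_X, 3))  # about 2.659
print(round(V_X, 3))  # about 1.931
print(round(p_b, 4))  # about 0.9378
print(round(p_c, 4))  # about 0.6794
print(round(p_d, 4))  # about 0.3679
```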
EXAMPLE 10
Assume that the life of a packaged magnetic disk exposed to corrosive gases has a Weibull distribution with scale parameter θ = 300 hours and shape parameter β = 0.5. Calculate the probability that:

a. a disk lasts at least 600 hours,
b. a disk fails before 500 hours.
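Example 10 uses the Weibull survival function P(X > x) = exp[−(x/θ)^β] with δ = 0:

```python
import math

theta, beta = 300.0, 0.5   # scale (hours) and shape

def survival(x):
    """P(X > x) for a Weibull distribution with delta = 0."""
    return math.exp(-((x / theta) ** beta))

p_a = survival(600)        # a) disk lasts at least 600 hours
p_b = 1 - survival(500)    # b) disk fails before 500 hours

print(round(p_a, 4))  # about 0.2431
print(round(p_b, 4))  # about 0.7250
```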
PARAMETER ESTIMATION
• Distribution functions are defined by their distribution parameters.

• Estimating a particular parameter accurately can involve methods ranging from simple to intricate.

• This section emphasises two methods widely used to estimate the parameters of a probability distribution: Probability plotting and the Maximum Likelihood Estimator (MLE).
PROBABILITY PLOTTING
• Probability plotting provides a graphical picture as well as quantitative estimates of how well the distribution fits the data.

• Furthermore, it is an extremely useful technique for data with relatively small sample sizes.

• The probability plot method is based on the linear equation principle.

• The equation of the corresponding CDF is transformed to a linear form in order to plot a straight-line graph.
• An example of the application of probability plotting is shown for the Weibull distribution.

• By taking the logarithm of the Weibull CDF twice, a linear relation is obtained.

• Assuming that the location parameter is zero during the initial stage:

  F(x) = 1 − exp[ −(x/θ)^β ]
• The CDF equation is then transformed towards linear form as follows:

  exp[ −(x/θ)^β ] = 1 − F(x)

  (x/θ)^β = −ln[1 − F(x)]

• The right side is written in positive form. Thus

  (x/θ)^β = ln[ 1/(1 − F(x)) ]
• By taking the logarithm a second time, the equation can be written as

  ln ln[ 1/(1 − F(x)) ] = β · ln(x/θ)

• The right side of the above equation is then expanded into a linear equation:

  ln ln[ 1/(1 − F(x)) ] = β · ln(x) − β · ln(θ)

• This is now a straight-line equation where ln ln[1/(1 − F(x))] is the dependent variable, ln(x) is the independent variable, β is the slope, and −β ln(θ) is the y-axis intercept.
In the straight-line form y = mx + c:

  y = ln ln[ 1/(1 − F(x)) ]
  x = ln(x)
  m = β
  c = −β · ln(θ)

• From the constructed graph, the shape parameter can be determined from the slope of the straight line. The scale parameter can then be calculated from

  θ = exp(−c/β),  where c = y-axis intercept.
• Each observation is plotted at

  y = ln ln[ 1/(1 − F(x)) ]

where F(x) = cumulative probability assigned to the observation (0–1) and n = total frequency.
[Weibull probability plot: y = ln[ln(1/(1 − F(t)))] against x = ln(t − δ); fitted straight line y = 1.9312x − 2.8625 with R² = 0.9835.]
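The plotting recipe above can be sketched end-to-end. Two assumptions are mine, not the slides': the sample is synthetic (inverse-transform draws from a Weibull with β = 2, θ = 100 at evenly spaced quantiles), and the plotting position F(xᵢ) = (i − 0.3)/(n + 0.4) is the common median-rank choice, since the slides only say F(x) is a probability in 0–1:

```python
import math

# Synthetic "observed" failure times from a Weibull with beta=2, theta=100
n = 20
beta_true, theta_true = 2.0, 100.0
times = sorted(theta_true * (-math.log(1 - (i - 0.5) / n)) ** (1 / beta_true)
               for i in range(1, n + 1))

# Median-rank plotting position (assumed choice)
F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]

# Transform to the linear form: y = beta*ln(x) - beta*ln(theta)
xs = [math.log(t) for t in times]
ys = [math.log(math.log(1 / (1 - f))) for f in F]

# Ordinary least-squares slope m and intercept c
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
c = (sy - m * sx) / n

beta_hat = m                   # shape = slope
theta_hat = math.exp(-c / m)   # scale = exp(-c/beta)

print(round(beta_hat, 2), round(theta_hat, 1))  # close to 2 and 100
```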
MAXIMUM LIKELIHOOD ESTIMATOR
• In statistics, maximum-likelihood estimation (MLE) is a method of estimating the parameters of a statistical model.

• When applied to a data set and given a statistical model, maximum-likelihood estimation provides estimates for the model's parameters.

• MLE is in general an iterative calculation method in which the parameter estimation is repeated until the required parameter converges.

• It is a complicated procedure compared to graphical Probability plotting.
• The basic idea behind MLE is to obtain the most likely values of the parameters, for a given distribution, that best describe the data.

• For an independent and identically distributed sample, the joint density function is

  f(x₁, x₂, …, xₙ | θ) = f(x₁ | θ) · … · f(xₙ | θ)

• Now we look at this function from a different perspective by considering the observed values x₁, x₂, …, xₙ to be fixed "parameters" of this function, whereas θ will be the function's variable (the distribution parameter) and is allowed to vary freely; this function is called the likelihood.
  L(θ; x₁, x₂, …, xₙ) = f(x₁, x₂, …, xₙ | θ)

  L(θ | xᵢ) = ∏ᵢ₌₁ⁿ f(xᵢ | θ)

where:
f(xᵢ) = probability density function (PDF)
n = total number of observed data
xᵢ = observed data for observation order i
• Example: the likelihood function for the Exponential distribution is denoted as

  L(λ) = ∏ᵢ₌₁ⁿ λ · exp(−λxᵢ)

• where the offset value x₀ is assumed zero, and the log-likelihood function is

  ln L(λ) = Σᵢ₌₁ⁿ [ ln λ − λxᵢ ]

  ln L(λ) = n · ln λ − λ · Σᵢ₌₁ⁿ xᵢ
• The equation can also be written in a simplified form as

  ln L(λ) = n · [ ln λ − λx̄ ]

• where

  x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ   (sample average)

• Differentiating the above equation gives the maximum likelihood estimator equation for λ:

  ∂ ln L/∂λ = n · ( 1/λ − x̄ )
• Setting the derivative to zero (to find the λ that maximizes L),

  ∂ ln L/∂λ = 0

• gives

  λ = 1/x̄   (λ is the inverse of the sample average)
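The closed-form result λ = 1/x̄ can be checked numerically by confirming that it beats nearby candidate values of λ on the log-likelihood (the lifetimes below are hypothetical data for illustration):

```python
import math

# Hypothetical observed lifetimes
data = [2.0, 3.0, 5.0, 7.0, 11.0, 8.0, 4.0]
n = len(data)
xbar = sum(data) / n

# Closed-form MLE for the Exponential rate: lambda = 1 / sample mean
lam_hat = 1 / xbar

def log_likelihood(lam):
    """ln L(lambda) = n*ln(lambda) - lambda*sum(x_i), with x0 = 0."""
    return n * math.log(lam) - lam * sum(data)

# The estimate should beat nearby candidate values of lambda
best = log_likelihood(lam_hat)
print(round(lam_hat, 4))                      # 0.175
print(best > log_likelihood(lam_hat * 0.9))   # True
print(best > log_likelihood(lam_hat * 1.1))   # True
```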
GOODNESS-OF-FIT TEST
• Even though the parameters of a probability distribution can be estimated using MLE or probability plotting, this does not mean that the distribution ultimately represents the data population accurately, since the distribution type is chosen based on the shape of the plotted histogram of the population data.

• To determine whether the chosen distribution with the estimated parameters is a sensible model for the given observed data, a goodness-of-fit test should be carried out.

• The test determines how accurately the chosen distribution fits the data population.

• There are many goodness-of-fit tests that can be applied, such as the Chi-square test, Kolmogorov–Smirnov test, Graphical test, Hollander–Proschan test, Mann–Scheuer–Fertig test and many more.
• Only the Chi-square and Graphical tests will be discussed in this section.

• The Graphical test is the simplest test compared to the others.

• The Chi-square test is one of the most popular tests for verifying the type of probability distribution, but it is not suitable for a small sample of data.

• It usually requires about 25 data points or more. For a small number of sample data, the Hollander–Proschan test is preferred.
CHI-SQUARE TEST
• The Chi-square value, denoted χ², provides a good test of whether the supposed distribution fits the real one.

• The observed data are grouped into class intervals with observed frequency O.

• Suppose that, for a group of observed data, a distribution is specified by making a hypothesis based on the histogram shape.

• For each class of the grouped data, the expected frequency can be estimated on the basis of the hypothesised distribution.

• The test is carried out by multiplying the probability of the hypothesised distribution for each class interval by the number of data, n, to obtain the expected frequency E. The χ² contribution can then be estimated for each class using the formula below.
  χ² = Σₖ (Oₖ − Eₖ)² / Eₖ   (summed over all kₙ classes)
• The single χ² values for each class are summed.

• The hypothesis that the actual distribution fits the theoretical distribution is verified by comparing the estimated χ² with the critical value of the χ² statistic from the Chi-square statistical table.

• If the critical value of the χ² statistic is less than the calculated value, the hypothesis is rejected.

• The χ² value from the statistical table is determined for a given level of significance, for example 5% (0.05), and depends on the degrees of freedom.
• The degrees of freedom can be calculated as follows:

  d_g = k_n − 1

where:
χ² = chi-square value
d_g = degrees of freedom
E = expected value
k_n = number of classes
O = observed value
CHI-SQUARE GOODNESS OF FIT TEST

A chi-square goodness of fit test determines if sample data matches a population.
• The Chi-square goodness of fit test is a non-parametric test used to find out how significantly the observed values of a given phenomenon differ from the expected values.

• The term "goodness of fit" is used to compare the observed sample distribution with the expected probability distribution.

• The test determines how well a theoretical distribution (such as normal, binomial, or Poisson) fits the empirical distribution.

• The sample data are divided into intervals, and the number of points that fall into each interval is compared with the expected number of points in each interval.
• The following example demonstrates the calculation of the chi-square value χ² for each bin based on the Chi-square equation.

• The test statistic approximately follows a chi-square distribution with d = k − 1 = 4 degrees of freedom, where k is the number of non-empty cells.

• The expected frequency E was estimated by multiplying the probability from the Weibull CDF by the total observed frequency.

• The hypothesis of an underlying Weibull distribution for the corrosion depth dC98 was accepted at a significance level of 0.05 (5%), since the total χ² value of 6.963 was less than the value of 9.488 taken from the chi-square standard table.
CHI-SQUARE TEST EXAMPLE

Hypothesis: the data follow a Weibull distribution,

  f_X(x) = (β/θ) · ((x − δ)/θ)^(β−1) · exp[ −((x − δ)/θ)^β ]

  F(x) = 1 − exp[ −((x − δ)/θ)^β ]

with β = 1.79 and θ = 15.7 (estimated from the probability plot).
Using the hypothesised Weibull CDF with β = 1.79 and θ = 15.7, the probability and expected frequency for each class are:

Class   Probability    Expected
0-5     0.147586       5.5
5-10    0.246574       9.1
10-15   0.229922       8.5
15-20   0.168443       6.2
20-25   0.104451       3.9
25-30   0.056626       2.1

Example of calculation:
Probability (0 < x < 5) = 0.1476
Total no. of data = 37
Expected frequency for 0 < x < 5 = 0.1476 × 37 = 5.5
Class   Probability    Observed (O)   Expected (E)   χ² = (O−E)²/E
0-5     0.147586       6              5.5            0.0533
5-10    0.246574       10             9.1            0.0843
10-15   0.229922       8              8.5            0.0302
15-20   0.168443       6              6.2            0.0087
20-25   0.104451       4              3.9            0.0047
25-30   0.056626       3              2.1            0.3908
Total                                                0.5719

Degrees of freedom = 6 classes − 1 = 5
Level of significance = 5% (0.05)

[Figure: observed vs expected frequency.]
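The χ² total in the table can be reproduced from the class probabilities (the per-class values above use unrounded expected frequencies). The 5% critical value of 11.07 for 5 degrees of freedom is taken from a standard chi-square table:

```python
# Class probabilities from the hypothesised Weibull CDF (table above)
probs = [0.147586008, 0.246573692, 0.229921628,
         0.168443234, 0.104450981, 0.056626195]
observed = [6, 10, 8, 6, 4, 3]
n_data = 37

expected = [p * n_data for p in probs]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
dof = len(observed) - 1   # 6 classes - 1 = 5

print(round(chi2, 4))  # 0.5719
print(dof)             # 5
print(chi2 < 11.07)    # True: well below the 5% critical value for 5 dof,
                       # so the Weibull hypothesis is not rejected
```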
GRAPHICAL TEST
• Probability plotting, discussed in the previous section, can also be used to measure approximately how well the data are represented by the proposed distribution.

• The goodness of fit of the distribution can be measured using the value of the coefficient of determination, R².

• The R² value is based on how well the data fit the straight line.

• If the line is straight enough, with an R² value close to one, the hypothesis can be reasonably accepted.

• In a real situation, an R² value of exactly one is hardly achievable.
EXTREME VALUE DISTRIBUTION
• Knowledge of the extreme value of a physical quantity can be very important in structural assessments, since the extreme variable carries the highest probability of causing failure of the system.

• It is well known that extreme values represent a compromise between the critical capacity of engineering elements and their associated extreme operating conditions.

• Two examples where knowledge of the extreme value is important are discussed by Bailey [2000]: the effect of the deepest corrosion pits in a pipeline, and the effect of maximum traffic load on the service life of a bridge.

• Extreme value analysis is also widely used in ocean engineering, pollution studies, meteorology, fatigue strength and electrical strength of materials.
• The probability distribution of extreme values can be determined by considering the CDF of the largest value, Yₙ, expected in n observations:

  Yₙ = max(x₁, x₂, x₃, x₄, …, xₙ)

• The CDF of these extreme values can be written as

  F_Yₙ(y) = [F_X(y)]ⁿ

where F_X(y) is the cumulative distribution function of the initial (parent) variable evaluated at y.

• The calculated CDF represents the exact probability distribution of maximum values. The PDF of extreme values can be obtained by differentiating the previous equation:

  f_Yₙ(y) = n · [F_X(y)]ⁿ⁻¹ · f_X(y)

• From this equation, an extreme PDF of the random variable for n observations can be identified from its initial distribution.
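A small sketch of F_Yₙ(y) = [F_X(y)]ⁿ and f_Yₙ(y) = n[F_X(y)]ⁿ⁻¹f_X(y), assuming a standard normal parent distribution (my choice for illustration). At any fixed y, the probability that the maximum stays below y drops as n grows, which is how the extreme distribution shifts toward larger values:

```python
from statistics import NormalDist

parent = NormalDist()  # standard normal parent variable

def F_max(y, n):
    """CDF of the maximum of n independent draws: F_X(y)**n."""
    return parent.cdf(y) ** n

def f_max(y, n):
    """PDF of the maximum: n * F_X(y)**(n-1) * f_X(y)."""
    return n * parent.cdf(y) ** (n - 1) * parent.pdf(y)

# P(max <= 2) shrinks rapidly as the number of observations grows
for n in (2, 20, 200, 2000):
    print(n, round(F_max(2.0, n), 4))
```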
EXTREME VALUE DISTRIBUTION
f(x) n=2000

n=200

n=20

n=2
