EPM944: Managing Risk and Uncertainty

Lecture 2: Structure of uncertainty and risk - Review of probability theory

Professor George Halikias

School of Mathematics, Computer Science and Engineering


City University London
Probability of risky events

- The best way to start thinking about risky events is to use the
  tools of probability theory.
- Let A denote the event that "the loss of an investment is
  greater than 1 million pounds", and suppose the probability of the
  event is 10%.
- In practice, we often say the "risk" of event A is 10%.
- This risk must be distinguished from the value
  1,000,000 × 10% = 100,000, which is the statistical average
  of the loss above 1 million!
Probability theory: Sample Space and Events

- Random experiment: a process of observation whose outcome
  cannot be predicted with certainty.
- Sample space Ω: the set of all possible outcomes of a
  random experiment. This can be discrete or continuous.
- Sample point: each individual outcome of a random experiment (i.e.
  any member of Ω).

Example: Consider the random experiment of tossing a coin twice.
The sample space is Ω = {HH, HT, TH, TT}, where, e.g., HT
means the first toss results in Heads and the second toss results in Tails.
Algebra of sets

- A set A is a subset of a set B, written A ⊆ B, if every element of
  A is an element of B. A = B if A ⊆ B and B ⊆ A.
- Complementation: Let A ⊆ Ω. Then A^c = {ζ : ζ ∈ Ω and ζ ∉ A}.
- Union: A ∪ B = {ζ : ζ ∈ A or ζ ∈ B}. This can be
  generalised to the union of any finite (or infinite) number of sets.
- Intersection: A ∩ B = {ζ : ζ ∈ A and ζ ∈ B}.
- Null (empty) set: the set ∅ containing no elements.
- Two sets A and B are called disjoint if A ∩ B = ∅.
Set Complementation

Figure: Complement of set A in S, A^c


Union of two sets

Figure: Union of A and B, A ∪ B


Intersection of two sets

Figure: Intersection of A and B, A ∩ B


Laws of union and intersection operators

- Commutativity: A ∪ B = B ∪ A, A ∩ B = B ∩ A.
- Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C,
  A ∩ (B ∩ C) = (A ∩ B) ∩ C.
- Distributivity: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),
  A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
- De Morgan laws: (A ∪ B)^c = A^c ∩ B^c, (A ∩ B)^c = A^c ∪ B^c.

Every subset of the sample space Ω is an event:
- Ω = the certain event, ∅ = the impossible event.
- A^c = the event that A did not occur.
- A ∪ B = the event that either A or B occurred.
- A ∩ B = the event that both A and B occurred.
- ⋃_i A_i = the event that at least one A_i occurred.
- ⋂_i A_i = the event that all A_i occurred.
Probability axioms

A probability measure is a function which assigns real numbers to
events in a sample space Ω, according to the following rules (axioms):

- Axiom 1: P(A) ≥ 0.
- Axiom 2: P(Ω) = 1.
- Axiom 3: P(A ∪ B) = P(A) + P(B) if A ∩ B = ∅.

If Ω is an infinite set, axiom 3 is modified as follows:

- Axiom 3': If A_i ∩ A_j = ∅ for i ≠ j, then

      P(⋃_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i)
Elementary properties of probability

- P(A^c) = 1 − P(A): A and A^c are disjoint sets and
  A ∪ A^c = Ω. Apply axioms 2 and 3.
- P(∅) = 0: P(∅) = P(Ω^c) = 1 − P(Ω) = 0 (from axiom 2 and
  the previous property).
- P(A) ≤ P(B) if A ⊆ B: write B = A ∪ (B ∩ A^c) and apply
  axioms 1 and 3.
- P(A) ≤ 1: from the last property, P(A) ≤ P(Ω). Apply axiom 2.
- P(A ∪ B) = P(A) + P(B) − P(A ∩ B): write
  A ∪ B = A ∪ (A^c ∩ B) and B = (A ∩ B) ∪ (A^c ∩ B). Apply
  axiom 3 and eliminate P(A^c ∩ B).
Event Risk and Quantity Risk

- Event risks have a simple yes/no form. They are quantified by the
  probability of a single event. Examples:
  - What is the risk that a particular company goes bankrupt?
  - What is the risk that a new drug fails to pass its safety checks?
- Quantity risks relate to a value which can vary (typically a
  monetary value). They are modelled by random variables. Examples:
  - What is the risk of losses in an investment project?
  - What is the risk of a high cost in a construction project?
- Quantity risks can be converted to event risks by adding some
  sort of hurdle: rather than asking about losses in general we
  may ask about the risk of losing more than 500,000.
Union risk and Intersection risk

- Union risk: this occurs when there are a number of different
  failure paths, each of which leads to the same outcome. E.g.
  there are many different things that can go wrong, each of which
  individually causes a failure of a rocket launch just before takeoff.
  The probability of failure is the probability that one or more of these
  events takes place.
- Intersection risk: this occurs when several occurrences combine to
  give a bad outcome. E.g. a power cut at the same time as
  the failure of an emergency generator leaves a hospital
  without power.
- Refining a risk estimate: can we get a more accurate estimate
  of the probability of a risk from the knowledge that another
  event has occurred? E.g. estimating the risk of having a disease
  if a medical test has come out positive or negative.
Venn diagram
An event is a subset of the set of all possible realisations (sample
space). Venn diagrams are the natural way to represent these sets.

A  B C

B
C
Probability of union events

P(A ∪ B ∪ C) = P(A) + P(B) + P(C)
             − P(A ∩ B) − P(A ∩ C) − P(B ∩ C)
             + P(A ∩ B ∩ C)

Figure: Venn diagram of A, B and C illustrating inclusion-exclusion
Comments

- The aim is to find the area (probability) covered by the union of the sets A, B
  and C in the Venn diagram.
- If we simply add up the probabilities of A, B and C, we double-count the
  intersections of each pair of sets.
- This is corrected by the three pairwise-intersection terms that are subtracted.
- However, the intersection of all three sets is now not included, so it
  needs to be added back.
- Now each disjoint region of the Venn diagram is included exactly once!
Example 2.1
The three main causes of failure of a rocket launch are: A = failure in fuel
ignition, P(A) = 0.001; B = failure of first stage separation,
P(B) = 0.0002; and C = failure in the control systems,
P(C) = 0.003. A, B and C are all independent. What is the
overall probability of failure?

P(A ∪ B ∪ C) = 0.001 + 0.0002 + 0.003 − 0.001 × 0.0002
               − 0.0002 × 0.003 − 0.001 × 0.003
               + 0.001 × 0.0002 × 0.003 = 0.004196,

which is virtually equal to P(A) + P(B) + P(C) = 0.0042.
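The arithmetic above is easy to check numerically. Below is a minimal Python sketch (the variable names pA, pB, pC are ours, not part of the lecture) that evaluates the inclusion-exclusion formula for three independent events and compares it with the simple sum P(A) + P(B) + P(C):

    # Probabilities of the three independent failure causes (Example 2.1)
    pA, pB, pC = 0.001, 0.0002, 0.003

    # Inclusion-exclusion for three events; independence lets us multiply
    # individual probabilities to obtain the intersection terms.
    p_union = (pA + pB + pC
               - pA * pB - pA * pC - pB * pC
               + pA * pB * pC)

    print(p_union)         # 0.0041962006
    print(pA + pB + pC)    # 0.0042 (virtually the same, since the
                           # intersection terms are tiny)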


Probability of union of n events

For n events A_1, A_2, . . . , A_n:

    P(⋃_{i=1}^n A_i) = Σ_{i≤n} P(A_i) − Σ_{i1<i2} P(A_i1 ∩ A_i2) + . . .
                       + (−1)^n Σ_{i1<...<i(n−1)} P(A_i1 ∩ A_i2 ∩ . . . ∩ A_i(n−1))
                       + (−1)^(n+1) P(A_1 ∩ A_2 ∩ . . . ∩ A_n)
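For independent events, the general formula can be evaluated directly. The sketch below (a helper written purely for illustration, not part of the lecture) sums over all non-empty subsets of events, which is exactly the inclusion-exclusion expansion:

    from itertools import combinations

    def union_prob_independent(probs):
        """Inclusion-exclusion for the union of independent events
        with the given individual probabilities."""
        n = len(probs)
        total = 0.0
        for k in range(1, n + 1):                  # size of the intersection
            sign = (-1) ** (k + 1)
            for subset in combinations(probs, k):  # all k-fold intersections
                term = 1.0
                for p in subset:
                    term *= p                      # independence: multiply
                total += sign * term
        return total

    print(union_prob_independent([0.001, 0.0002, 0.003]))  # 0.0041962006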
Probability of intersection risk

Let A and B denote two events. Then

    P(A ∩ B) = P(A)P(B|A) = P(B)P(A|B).

Here P(A|B) is called the conditional probability of A given B.
Likewise P(B|A) is the conditional probability of B given A.

- If P(A|B) = P(A) or P(B|A) = P(B), then A and B are said
  to be independent.
- When A and B are independent, P(A ∩ B) = P(A)P(B).
- Consider the random experiment of rolling a die with sample
  space Ω = {1, 2, 3, 4, 5, 6}. Let A_1 = {ζ ∈ Ω : ζ is even} and
  A_2 = {ζ ∈ Ω : ζ > 3}. Are A_1 and A_2 independent? Why not?
Conditional Probability

Figure: P(A|B) – Re-normalize over new sample space B


An example: independent case

- Independence is a way of saying that the event sets are not
  aligned.
- The probability of two independent events occurring together is
  the product of their probabilities.
- Hospital power supply: suppose that on any given day the probability of a
  power cut is 0.0005 and the probability that the generator fails is 0.002.
- If these are independent, the probability of a power supply
  failure on any given day is 0.0005 × 0.002 = 0.000001.
- This gives the approximate probability 0.000365 of (at least)
  one failure in a year. Question: Why is this an approximation?
An example: independent case (cont.)

We can calculate this exactly by asking: what is the probability of no
failure in a year?

The probability of no failure on each day of a one-year period is

    (1 − 0.000001)^365 = 0.999635066

Therefore, the probability of at least one failure in a year is

    1 − 0.999635066 = 0.000364934
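A short Python check of these numbers (the daily failure probability and the 365-day year are taken from the example; everything else is our own illustration):

    p_cut, p_gen = 0.0005, 0.002
    p_day = p_cut * p_gen              # daily failure prob., independence assumed

    approx = 365 * p_day               # simple approximation: 0.000365
    exact = 1 - (1 - p_day) ** 365     # exact: 1 - P(no failure all year)

    print(approx)   # 0.000365
    print(exact)    # 0.00036493...  (slightly smaller, since the approximation
                    #  over-counts years with more than one failure)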
How to avoid inclusion-exclusion when computing union risk?

- The trick is to use complements. For any event A, we define A^c
  to be the event that A does not occur.
- P(A^c) = 1 − P(A)
- P((A ∪ B)^c) = P(A^c ∩ B^c)
- P((A ∪ B ∪ C)^c) = P(A^c ∩ B^c ∩ C^c)
- If A, B and C are independent, then A^c, B^c and C^c are also
  independent, and consequently

      P(A^c ∩ B^c ∩ C^c) = P(A^c)P(B^c)P(C^c)
                         = (1 − P(A))(1 − P(B))(1 − P(C))
Revisiting Example 2.1

- P(A^c) = 1 − P(A) = 1 − 0.001 = 0.999
- P(B^c) = 1 − P(B) = 1 − 0.0002 = 0.9998
- P(C^c) = 1 − P(C) = 1 − 0.003 = 0.997
- P(A^c)P(B^c)P(C^c) = 0.999 × 0.9998 × 0.997 = 0.995804
- P((A ∪ B ∪ C)^c) = P(A^c ∩ B^c ∩ C^c) = 0.995804
- Hence:

      P(A ∪ B ∪ C) = 1 − P((A ∪ B ∪ C)^c)
                   = 1 − 0.995804
                   = 0.004196
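The same complement trick in Python, for any number of independent events (the function name and inputs are ours, for illustration only):

    def union_prob_by_complement(probs):
        """P(A1 ∪ ... ∪ An) for independent events, via
        1 - P(no event occurs) = 1 - prod(1 - p_i)."""
        p_none = 1.0
        for p in probs:
            p_none *= (1.0 - p)    # all events fail to occur
        return 1.0 - p_none

    print(union_prob_by_complement([0.001, 0.0002, 0.003]))  # 0.004196...

This avoids the 2^n − 1 terms of inclusion-exclusion entirely: only one product over the n complement probabilities is needed.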
A generalization

Theorem 2.1
Let A_i, i = 1, · · · , n be events. Then

    P(⋃_{i=1}^n A_i) = 1 − P(⋂_{i=1}^n A_i^c).

Proof: Follows from the fact that

    (⋂_{i=1}^n A_i^c)^c = ⋃_{i=1}^n A_i
Random variables (revision)

- Consider a random experiment with sample space Ω.
- A random variable is a function X(ζ) : Ω → R.
- The sample space Ω is called the domain of the r.v. X, and
  the set of all values X(ζ), ζ ∈ Ω, is the range of X.
- Example: consider the random experiment of tossing a coin
  once. We can define a r.v. X as X(H) = 1 and X(T) = 0,
  where H stands for Heads and T stands for Tails.
- Events defined by r.v.'s: if X is a r.v. and x is a fixed
  number, we can define the event {X = x} = {ζ : X(ζ) = x};
  similarly {X ≤ x} = {ζ : X(ζ) ≤ x}, etc.
- These events have probabilities, e.g.
  P{X ≤ x} = P({ζ : X(ζ) ≤ x}).
Random variables

Figure: Random variable X : S = {H, T } → {0, 1} ⊆ R


Example

- Consider the experiment of tossing a fair coin 3 times.
- The sample space consists of eight equally likely outcomes:
  Ω = {HHH, HHT, . . . , TTT}.
- Let X be the r.v. giving the number of Heads obtained. What is
  P{X < 2}?
- We have A = {X < 2} = {HTT, THT, TTH, TTT} and
  P(A) = 4/8 = 1/2, since all 8 outcomes in Ω are equally likely. (A quick
  enumeration is sketched below.)
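A brute-force enumeration in Python confirms this (itertools.product generates the 8 equally likely outcomes; the rest is our own illustrative code):

    from itertools import product

    outcomes = list(product("HT", repeat=3))               # the 8 equally likely sequences
    favourable = [w for w in outcomes if w.count("H") < 2]  # fewer than 2 Heads

    print(len(favourable), "out of", len(outcomes))         # 4 out of 8
    print(len(favourable) / len(outcomes))                  # 0.5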
Distribution functions

- The cumulative distribution function (cdf) of a r.v. X is
  F_X(x) = P{X ≤ x}, −∞ < x < ∞. Properties:
  - 0 ≤ F_X(x) ≤ 1 (F_X(x) is a probability).
  - F_X(x1) ≤ F_X(x2) if x1 < x2 (since {X ≤ x1} ⊆ {X ≤ x2}). Thus
    F_X(x) is a non-decreasing function of x.
  - lim_{x→∞} F_X(x) = F_X(∞) = 1 ({X ≤ ∞} = Ω).
  - lim_{x→−∞} F_X(x) = F_X(−∞) = 0 ({X ≤ −∞} = ∅).
  - F_X(x) is right-continuous (since we took X ≤ x rather than
    X < x in the definition of F_X(x)).
Example

Consider again the experiment of tossing a fair coin 3 times.
Define X = number of Heads that come up. What is F_X(x)?

     x     {X ≤ x}                       F_X(x)
    −1     ∅                              0
     0     {TTT}                          1/8
     1     {TTT, TTH, THT, HTT}           1/2
     2     {HHH}^c                        7/8
     3     Ω                              1
     4     Ω                              1
Example (cont.)

Figure: Cumulative distribution function F_X(x)


Discrete Random Variables

- Let X be a r.v. with cdf F_X(x). If F_X(x) changes value only
  in jumps (at most countably many), then X is called a discrete r.v.
- Suppose the jumps of F_X(x) occur at the points x1, x2, . . . where
  x1 < x2 < . . .. Then

      F_X(xi) − F_X(xi−1) = P{xi−1 < X ≤ xi} = P{X = xi} = p_X(xi)

  defines the probability mass function (pmf) p_X(x)
  (p_X(x) = P{X = xi} if x = xi, and p_X(x) = 0 otherwise).
- Properties of the mass function:
  - 0 ≤ p_X(xk) ≤ 1 for k = 1, 2, . . .
  - p_X(x) = 0 for x ≠ xk, k = 1, 2, . . .
  - Σ_k p_X(xk) = 1
  - F_X(x) = P(X ≤ x) = Σ_{xk ≤ x} p_X(xk)
CDF: Discrete Random Variables

Figure: The CDF F_X(x) of a discrete r.v. is right-continuous


Mean and Variance

- The mean (or expected value) of a discrete r.v. X:

      µ_X = E(X) = Σ_k xk p_X(xk)

- The variance of X:

      σ_X² = Var(X) = E{[X − E(X)]²} = Σ_k (xk − µ_X)² p_X(xk)

- Thus µ_X = E(X) is a weighted average of the values of the
  r.v. (weighted by the probability mass function). The variance
  σ_X² indicates the "spread" of the r.v. around its mean.
- The standard deviation σ_X is the square root of the variance.
- Note that σ_X² = E(X²) − [E(X)]².
Useful discrete distributions

- Bernoulli distribution: X is a Bernoulli r.v. with parameter p
  if its pmf is given by p_X(0) = 1 − p, p_X(1) = p, and
  p_X(k) = 0 otherwise (k ∈ Z). Thus the cdf F_X(x) of X is:
  F_X(x) = 0 (x < 0), F_X(x) = 1 − p (0 ≤ x < 1) and
  F_X(x) = 1 (x ≥ 1).
- The Bernoulli r.v. is associated with a random experiment
  whose outcome is either a "success" (with probability p)
  or a "failure" (with probability 1 − p).
- Properties: E(X) = p, σ_X² = p(1 − p).
Bernoulli distribution

Figure: PMF and CDF of a Bernoulli random variable


Useful discrete distributions (cont.)

- Binomial distribution: X is a binomial r.v. with parameters
  (n, p) if its pmf is given by
  p_X(k) = P{X = k} = C(n, k) p^k (1 − p)^(n−k) for k = 0, 1, . . . , n.
  Here 0 ≤ p ≤ 1 and C(n, k) = n! / (k!(n − k)!) is the binomial coefficient.
- The cdf F_X(x) of X is F_X(x) = Σ_{i=0}^{k} C(n, i) p^i (1 − p)^(n−i)
  for k ≤ x < k + 1.
- The binomial random variable describes the total number of
  successes in a random experiment consisting of n independent
  Bernoulli trials, each trial having probability of success p and
  probability of failure q = 1 − p.
- Using the binomial expansion theorem it follows that
  µ_X = E(X) = np and σ_X² = np(1 − p). (A short numerical check is
  sketched below.)
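A minimal Python sketch (standard library only; math.comb gives the binomial coefficient) that evaluates the binomial pmf and verifies E(X) = np and Var(X) = np(1 − p) numerically, using the n = 10, p = 0.3 values from the figure that follows:

    from math import comb

    def binom_pmf(k, n, p):
        """P{X = k} for a Binomial(n, p) random variable."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 10, 0.3
    pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

    mean = sum(k * pk for k, pk in enumerate(pmf))
    var = sum((k - mean)**2 * pk for k, pk in enumerate(pmf))

    print(sum(pmf))   # 1.0 (the pmf sums to one)
    print(mean)       # ≈ 3.0  = n*p
    print(var)        # ≈ 2.1  = n*p*(1-p)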
Binomial distribution

Figure: PMF of the Binomial distribution, N = 10, p = 0.3


Useful discrete distributions (cont.)

- Poisson distribution: a r.v. X is called Poisson with parameter λ if its
  pmf is given by p_X(k) = P{X = k} = e^(−λ) λ^k / k!, k ≥ 0. The
  corresponding cdf of X is F_X(x) = e^(−λ) Σ_{k=0}^{n} λ^k / k!
  for n ≤ x < n + 1.
- The Poisson r.v. is a good approximation to the binomial r.v.
  with parameters (n, p) when n is large and p is small enough,
  so that λ = np has moderate size.
- Typical examples: (i) the number of telephone calls arriving at a
  switching centre; (ii) the number of misprints on a page; (iii) the
  number of customers entering a bank.
- Properties: E(X) = σ_X² = λ. (A numerical comparison with the binomial
  is sketched below.)
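A quick Python comparison of the Binomial(n, p) pmf with its Poisson(λ = np) approximation for large n and small p (the particular values n = 1000, p = 0.003 are our own choice of illustration):

    from math import comb, exp, factorial

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def poisson_pmf(k, lam):
        return exp(-lam) * lam**k / factorial(k)

    n, p = 1000, 0.003        # large n, small p
    lam = n * p               # λ = np = 3

    for k in range(6):
        print(k, round(binom_pmf(k, n, p), 5), round(poisson_pmf(k, lam), 5))
    # The two columns agree closely, as expected.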
Poisson distribution

Figure: PMF Poisson random variable λ = 3


Continuous Random Variables

- Let X be a r.v. with cdf F_X(x). If F_X(x) is a continuous
  function and also has a derivative which exists everywhere
  (except possibly at a finite number of points) and is
  piecewise continuous, then X is called a continuous r.v.
- If X is a continuous r.v. then P{X = x} = 0.
- The function f_X(x) = dF_X(x)/dx is called the probability density
  function (pdf) of X.
- Properties of the pdf f_X(x):
  - f_X(x) ≥ 0 (f_X(x)dx is a probability).
  - ∫_{−∞}^{∞} f_X(x)dx = 1 (P(Ω) = 1).
  - P{a < X < b} = ∫_a^b f_X(x)dx = F_X(b) − F_X(a)
  - The cdf of X can be recovered as
    F_X(x) = P{X ≤ x} = ∫_{−∞}^{x} f_X(ξ)dξ.
CDF of Continuous random variable

Figure: Cumulative distribution function FX (x)


PDF of Continuous random variable

Figure: Probability density function fX (x)


Mean and Variance

- The mean (or expected value) of a continuous r.v. X is
  µ_X = E(X) = ∫_{−∞}^{∞} x f_X(x)dx.
- The variance of X is defined as

      σ_X² = Var(X) = E{[X − E(X)]²} = ∫_{−∞}^{∞} (x − µ_X)² f_X(x)dx

- µ_X = E(X) is the (continuous) weighted average of the values
  of the r.v. (weighted by the probability density function).
- The variance σ_X² indicates the "spread" of the r.v. around its mean.
- The standard deviation is the square root of σ_X².
- Also σ_X² = E(X²) − [E(X)]².
Useful continuous distributions

- Uniform distribution: X is a uniform r.v. over (a, b) if its pdf
  is given by f_X(x) = 1/(b − a) for a < x < b and f_X(x) = 0 otherwise.
  The corresponding cdf of X is: F_X(x) = 0 for x ≤ a,
  F_X(x) = (x − a)/(b − a) for a < x < b, and F_X(x) = 1 for x ≥ b.
- A uniform r.v. is often used when we have no prior knowledge
  of the actual pdf and all continuous values in some range
  seem equally likely.
- Properties: E(X) = (a + b)/2, which is the mid-point of (a, b). Its
  variance is σ_X² = (b − a)²/12.
Uniform distribution

Figure: PDF and CDF of uniform random variable


Useful continuous distributions (cont.)

- Exponential distribution: X is an exponential r.v. with
  parameter λ (> 0) if its pdf is given by f_X(x) = λe^(−λx) for
  x > 0 and f_X(x) = 0 otherwise. The corresponding cdf of X
  is F_X(x) = 1 − e^(−λx) for x ≥ 0 and zero for x < 0.
- The most interesting property of the exponential distribution
  is its "memoryless" property: if the lifetime of an item is
  exponentially distributed, then an item that has been used for
  some hours is "as good as new" with regard to the amount of
  time remaining until it fails. (A numerical illustration is sketched below.)
- Parameters: µ_X = E(X) = 1/λ, σ_X² = 1/λ².
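The memoryless property P{X > s + t | X > s} = P{X > t} can be checked directly from the cdf. The sketch below uses λ = 0.5 and s = 2, t = 3 purely as illustrative values:

    from math import exp

    lam, s, t = 0.5, 2.0, 3.0

    def surv(x, lam):
        """P{X > x} = e^(-λx) for an exponential r.v."""
        return exp(-lam * x)

    p_conditional = surv(s + t, lam) / surv(s, lam)   # P{X > s+t | X > s}
    p_fresh = surv(t, lam)                            # P{X > t}

    print(p_conditional)   # 0.22313...
    print(p_fresh)         # 0.22313...  (identical: the used item is "as good as new")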
Exponential distribution

Figure: PDF and CDF of exponential random variable


Useful continuous distributions (cont.)

- Normal (or Gaussian) distribution: perhaps the most
  important distribution in statistics. A r.v. X is normal (or
  Gaussian) if its pdf is

      f_X(x) = (1/(σ√(2π))) e^(−(x − µ)²/(2σ²)).

- The corresponding cdf of X is then

      F_X(x) = (1/(σ√(2π))) ∫_{−∞}^{x} e^(−(ξ − µ)²/(2σ²)) dξ

  (this integral has no closed form; a numerical evaluation is sketched below).
- The mean and variance of the normal distribution are
  E(X) = µ and Var(X) = σ² respectively.
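The normal cdf can be evaluated with the error function from the Python standard library; a minimal sketch (the chosen µ, σ and query points are ours):

    from math import erf, sqrt

    def normal_cdf(x, mu=0.0, sigma=1.0):
        """F_X(x) for a Normal(mu, sigma^2) random variable."""
        return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

    print(normal_cdf(0.0))                      # 0.5   (half the mass lies below the mean)
    print(normal_cdf(1.96))                     # ≈ 0.975 (the familiar 97.5% point)
    print(normal_cdf(1.0) - normal_cdf(-1.0))   # ≈ 0.6827 (mass within one sigma of the mean)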
Normal distribution

Figure: PDF of normal RV (µ = 0, σ = 0.5, 1, 2)


Normal distribution

Figure: CDF of normal RV (µ = 0, σ = 1)


Quantity Risk: random variables

- Often there is a numerical value associated with the risk, e.g.
  the risk that a stock price drops by a large amount.
- The probability that the largest of a set of random variables is
  greater than x is a union risk: it is the probability that one or
  more of these random variables is greater than x, i.e.

      {max_{i=1,2,...,N} Xi > x} = {X1 > x or . . . or XN > x}

- This becomes an intersection risk by reversing things, to find
  the probability that all the random variables are less than or equal to x:

      {max_{i=1,2,...,N} Xi > x}^c = {X1 ≤ x, X2 ≤ x, . . . , XN ≤ x}
Quantity Risk: random variables (cont.)

- Let Xi denote the price of a particular stock on day i,
  i = 1, · · · , N.
- The union risk is

      P( max_{i=1,2,...,N} Xi ≥ x )

  (the probability that the price of the stock on at least one of the
  days i = 1, 2, . . . , N is greater than or equal to x).
- The intersection risk is

      P( max_{i=1,2,...,N} Xi ≤ x )

  (the probability that the price of the stock on every one of the days
  i = 1, 2, . . . , N is less than or equal to x).
Example 2.2
Suppose the probability of IBM stock dropping by more than 10%
on any one day is 0.01. Find the probability of the event A = {on one
or more days in the next 20 days the stock drops by more than 10%}.

- P(A) = P(max drop over the next 20 days > 10%).
- P(A^c) = P(drop ≤ 10% on every one of the next 20 days).
- P(A) = 1 − P(A^c).
- The event {one-day drop ≤ 10%} has probability 0.99.
- If prices on different days are independent:
  P(A) = 1 − 0.99^20 = 0.182. (A one-line check is given below.)
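The last step in Python (numbers as in the example; the variable names are ours):

    p_drop = 0.01                    # one-day probability of a >10% drop
    p_A = 1 - (1 - p_drop) ** 20     # at least one such drop in 20 days
    print(p_A)                       # 0.1821...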
Probability of union risk in terms of random variables

- Let X and Y be two random variables and U = max(X, Y), and
  let F_X(z), F_Y(z) and F_U(z) denote the cumulative
  distribution functions of X, Y and U.
- Then F_U(z) = P({max(X, Y) ≤ z}) = P({X ≤ z, Y ≤ z}).
- If X and Y are independent, then

      P({X ≤ z, Y ≤ z}) = P({X ≤ z})P({Y ≤ z}) = F_X(z)F_Y(z).

A generalization
Let Xi, i = 1, · · · , N be independent and identically distributed
random variables with common cumulative distribution F_X(z). Let

    U = max_{i=1,··· ,N} Xi.

Then:

    F_U(z) = P{U ≤ z} = P{X1 ≤ z, X2 ≤ z, . . . , XN ≤ z}
           = Π_{i=1}^{N} P{Xi ≤ z} = (P{X1 ≤ z})^N = F_X(z)^N

Comment: F_U(z) is known as the distribution of the extreme (maximum) of the
sample. It represents the probability that every random variable/sample Xi
falls below z. (A small simulation check is sketched below.)
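A small Monte Carlo sketch (using the standard random module; the choice of uniform Xi, N = 5 and z = 0.8 is ours) comparing the simulated distribution of the maximum with F_X(z)^N:

    import random

    N, z, trials = 5, 0.8, 100_000
    random.seed(0)

    count = 0
    for _ in range(trials):
        sample = [random.random() for _ in range(N)]   # Xi ~ Uniform(0, 1)
        if max(sample) <= z:
            count += 1

    print(count / trials)   # simulated P{max Xi <= z}, approx 0.328
    print(z ** N)           # F_X(z)^N = 0.8^5 = 0.32768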
Computing risk when events are dependent

Definition 1
The events {An} form a partition of the sample space Ω if the
following hold:

- ⋃_n An = Ω;
- Ai ∩ Aj = ∅ for i ≠ j.

Theorem 2.2
(Law of total probability) If the events {An} form a partition, then

    P(B) = Σ_n P(B|An)P(An).

Total Probability

Figure: Total probability P(B) = Σ_n P(B|An)P(An)
Example 2.3
An engineering firm needs to make a decision as to whether to
develop a new product. Profit depends on whether there is
competition at the time when the product is released to the
market. The probability of having competition (s1 ) or no
competition (s2 ) depends on market demand. Let L= demand
low, H= demand high. The following table gives the
probabilities of H and L and the conditional probabilities of
having competition.

P(H) = 0.55   P(s1|H) = 0.18   P(s2|H) = 0.82
P(L) = 0.45   P(s1|L) = 0.89   P(s2|L) = 0.11

Calculate P(s1 ).
Example (cont.)

P(H) = 0.55   P(s1|H) = 0.18   P(s2|H) = 0.82
P(L) = 0.45   P(s1|L) = 0.89   P(s2|L) = 0.11

P(s1) = P(s1 ∩ (H ∪ L))
      = P((s1 ∩ H) ∪ (s1 ∩ L))
      = P(s1 ∩ H) + P(s1 ∩ L)
      = P(H)P(s1|H) + P(L)P(s1|L)
      = 0.55 × 0.18 + 0.45 × 0.89 = 0.4995
Example:
Suppose that a lab test to detect a certain disease has the
following statistics:
- A = event that the tested person has the disease.
- B = event that the test result is positive.
It is known that P(B|A) = 0.99, P(B|A^c) = 0.005, and that 0.1% of
the population actually has the disease. What is the probability
that a person has the disease given that the test result is positive?

Since P(A) = 0.001, we have P(A^c) = 0.999.

The desired probability is P(A|B). Then:

    P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A^c)P(A^c)]
           = (0.99)(0.001) / [(0.99)(0.001) + (0.005)(0.999)]
           = 0.165
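The same Bayes calculation in Python (probabilities as given in the example; the variable names are ours):

    p_disease = 0.001              # P(A): prevalence of the disease
    p_pos_given_disease = 0.99     # P(B|A): probability of a positive test if diseased
    p_pos_given_healthy = 0.005    # P(B|A^c): false positive rate

    # Law of total probability for P(B), then Bayes' formula for P(A|B)
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

    print(p_disease_given_pos)   # ≈ 0.165: a positive test still leaves only a
                                 # roughly 17% chance of actually having the disease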
Using sampling to reduce uncertainty (cont.)

We can use Bayes' formula to calculate the probability of success (S)
and failure (F) after a good (G) or bad (B) report.
- P(S|G) = P(G ∩ S)/P(G) = 0.24/0.31 = 0.77
- P(F|G) = P(F ∩ G)/P(G) = 0.07/0.31 = 0.23
- P(S|B) = P(S ∩ B)/P(B) = 0.06/0.69 = 0.09
- P(F|B) = P(F ∩ B)/P(B) = 0.63/0.69 = 0.91
- These quantities are called posterior probabilities.
I These quantities are called posterior probabilities.
Comments

- The uncertainty about success or failure is significantly reduced
  by sampling if the quality of the samples is good.
- Of course, sampling incurs an additional cost; this will be
  discussed under decision analysis.
