Chapter 3
No Data Decision Problems
Statistical decision problems are those in which there
are data, or observations on the state of nature, that
hopefully contain information which can be used to make a
better decision.
It will be useful to consider problems of making decisions in
the absence of data, not only because these problems are
simpler, but also because one approach to handling problems
involving data is to convert them to no data problems.
3.1 Introduction
The ingredients of a no data decision problem are the
triple (Θ, A, L), where

Θ : the set of states of nature;
A : the set of all available actions;
L : a real-valued function defined on Θ × A, in
    which L(θ, a) represents the loss incurred
    when one takes action a and the state of
    nature is θ.

Θ will be referred to as the state space, A as the action
space, and L as the loss function.
Leong YK & Wong WY Introduction to Statistical Decisions 2
Example 3.1.2
Consider a decision problem with the following loss
table:

L(θ, a)   a1   a2   a3
  θ1       5    3    5
  θ2       0    3    4
Example 3.1.3
Consider a decision problem with the following loss
table:

L(θ, a)   a1   a2   a3
  θ1       5    3    4
  θ2       0    2    4
3.2 Regret
If one knew the state of nature, one would immediately
know what action to take, namely the action for which the
loss is a minimum. But if one takes an action which does
not produce this minimum, one would regret not having
chosen the action that produces the minimum. The regret
function is defined by

Lr(θ, ai) = L(θ, ai) − min_a L(θ, a).
Regret table:

Lr(θ, a)   a1   a2   a3
  θ1        0    3    2
  θ2        2    0    4
Regret table:

Lr(θ, a)   a1   a2   a3
  θ1        2    0    2
  θ2        0    3    4
Lr(θ, a)   a1   a2   a3
  θ1        2    0    1
  θ2        0    2    4
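The regret computation above can be sketched in code. This is a minimal illustration, not from the text; the loss values are those of Example 3.1.3, and the variable and state names are my own.

```python
# Loss table of Example 3.1.3: loss[theta][a] = L(theta, a).
loss = {
    "theta1": {"a1": 5, "a2": 3, "a3": 4},
    "theta2": {"a1": 0, "a2": 2, "a3": 4},
}

def regret_table(loss):
    """Lr(theta, a) = L(theta, a) - min_a' L(theta, a'):
    subtract each state's row minimum from every entry."""
    return {
        theta: {a: l - min(row.values()) for a, l in row.items()}
        for theta, row in loss.items()
    }

print(regret_table(loss))
# theta1 row: 2, 0, 1 and theta2 row: 0, 2, 4 -- the regret table shown above.
```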
Mixed Action
A mixed action for a problem with action space
A = { a1, ..., an } is a probability vector

p~ = ( p1, ..., pn ),  0 ≤ pi ≤ 1,  p1 + ... + pn = 1,

which assigns probability pi to the pure action ai:

action        a1   ...   an
probability   p1   ...   pn
Each pure action ai in A = { a1, a2, ..., an } may be
identified with the degenerate mixed action assigning
probability 1 to ai; for instance,

a1 = ( 1, 0, 0, ..., 0 ).

We denote the set of all mixed actions by A*. Note that
A can be embedded in A* and be considered as a subset
of A*.
Example 3.3.1
Suppose that the action space of the decision problem
consists of only two actions, say A = { a1, a2 }. The mixed
action is

p~ = ( p, 1 − p ),  0 ≤ p ≤ 1.
The (expected) loss of a mixed action p~ = ( p1, ..., pn )
in a decision problem with loss function L(θ, a) is defined
to be

L(θ, p~) = Σ_{i=1}^{n} pi L(θ, ai),  θ ∈ Θ.

In particular, for each state θm,

L(θm, p~) = L(θm, a1) p1 + L(θm, a2) p2 + ... + L(θm, an) pn.
For instance, for a problem with loss points (1, 3), (4, 1)
and (3, 5),

( L(θ1, p~), L(θ2, p~) ) = ( p1 + 4 p2 + 3 p3 , 3 p1 + p2 + 5 p3 )
                         = p1 (1, 3) + p2 (4, 1) + p3 (3, 5).
The loss points of all mixed actions fill up the interior
(and the boundary) of the triangle with the pure loss
points as its vertices.
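As a sketch of this convex-combination view, the following code computes the loss point of a mixed action; the three pure loss points (1, 3), (4, 1), (3, 5) are those used for illustration above, and the function name is my own.

```python
# Pure loss points (L(theta1, a_i), L(theta2, a_i)) for i = 1, 2, 3.
pure_points = [(1, 3), (4, 1), (3, 5)]

def mixed_loss_point(p, points):
    """L(theta_m, p~) = sum_i p_i * L(theta_m, a_i) for each state theta_m:
    the convex combination of the pure loss points with weights p."""
    return tuple(
        sum(pi * pt[m] for pi, pt in zip(p, points))
        for m in range(len(points[0]))
    )

# An equal-weight mixture lands strictly inside the triangle of vertices.
print(mixed_loss_point((1/3, 1/3, 1/3), pure_points))
```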
Example 3.3.3
Consider a decision problem with the following loss
table:

L(θ, a)   a1   a2   a3
  θ1       5    3    5
  θ2       0    3    4
( L(θ1, p~), L(θ2, p~) ) = p1 (5, 0) + p2 (3, 3) + p3 (5, 4).
Example 3.3.4
Consider a decision problem with the following loss
matrix:

        a1   a2
 θ1      0    1
 θ2      6    5

The loss points of the two pure actions are the end points
of the line segment joining (0, 6) and (1, 5).
Convex Set
A set of points is said to be convex if the
line segment joining each pair of its points is
contained entirely in the set. The convex
hull of a set A is the smallest convex set
containing A.
Minimax Action
An action a' ∈ A is said to be a pure minimax
action if

max_θ L(θ, a') = min_{a∈A} max_θ L(θ, a).

A mixed action p~* is said to be a
minimax mixed action if

max_θ L(θ, p~*) = min_{p~∈A*} max_θ L(θ, p~).
Example 3.4.1
Consider a decision problem with the following loss
table:

                 a1   a2   a3
  θ1              4    5    2
  θ2              4    0    5
max_θ L(θ, a)     4    5    5

Action a1 is the minimax pure action.
Consider mixed actions of the form

p~ = ( 0, p, 1 − p ),  0 < p < 1.

The loss point of the minimax mixed action lies on the
line segment joining the loss points of a2 and a3.
Moreover, it lies on the bisector (the 45° line), and hence

L(θ1, p~) = L(θ2, p~),                              (*)

that is,

5 p + 2 (1 − p) = 0 p + 5 (1 − p),

or p = 3/8.
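The equalization step (*) can be checked mechanically. This is a small sketch for Example 3.4.1, solving the resulting linear equation for p; the loss points and variable names are taken from the example, and exact arithmetic is used so the answer 3/8 appears as a fraction.

```python
from fractions import Fraction

# Loss points (L(theta1, a), L(theta2, a)) of a2 and a3 in Example 3.4.1.
a2, a3 = (5, 0), (2, 5)

# Equalize the two state losses of p~ = (0, p, 1-p):
#   5 p + 2 (1 - p) = 0 p + 5 (1 - p)
# which is linear in p; solving gives p below.
p = Fraction(a3[1] - a3[0], (a2[0] - a3[0]) + (a3[1] - a2[1]))
print(p)  # 3/8

# The common (minimax) loss value in either state.
value = a2[0] * p + a3[0] * (1 - p)
print(value)  # 25/8
```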
Example 3.4.3
Consider a decision problem with loss table given by

                 a1   a2   a3   a4   a5
  θ1              2    4    3    5    3
  θ2              3    0    3    2    5
max_θ L(θ, a)     3    4    3    5    5
Historical Note
The minimax regret criterion was developed by the
statistician L. J. Savage (1917–1971).
Example 3.4.4
Consider a decision problem with loss table given by

          a1   a2   a3   a4   a5
  θ1       2    4    3    5    3
  θ2       3    0    3    2    5

Regret    a1   a2   a3   a4   a5
  θ1       0    2    1    3    1
  θ2       3    0    3    2    5
Note that

min_{a*∈A*} max_θ L(θ, a*) ≤ min_{a∈A} max_θ L(θ, a).
Regret    a1   a2   a3
  θ1       2    0    1
  θ2       0    2    4
Example 3.4.6
Consider a decision problem with the following loss
table:

        θ1   θ2   θ3
 a1      0    3    5
 a2      5    3    0
Let p~ = ( p, 1 − p ) be a mixed action. Its loss function is

L(θ1, p~) = 5 (1 − p)
L(θ2, p~) = 3
L(θ3, p~) = 5 p.

These loss functions are linear functions in p and are
plotted in the plane as follows:
        θ1   θ2   θ3   θ4
 a1      4    2    1    1
 a2      0    1    5    2
The minimax mixed action equalizes the losses in states
θ1 and θ3 (4 p = 5 − 4 p), so that

min_{p~∈A*} max_{θ∈Θ} L(θ, p~) = 4 · (5/8) = 5/2 = 5 − 4 · (5/8),

attained at p = 5/8.
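The value 5/2 can be verified by brute force: for p~ = (p, 1 − p), minimize the worst-state loss max_θ L(θ, p~) over a fine grid of p. This is a numerical sketch using the 4-state table above; the grid resolution and names are my own.

```python
# Loss rows L(theta_m, a1) and L(theta_m, a2) of the 4-state table above.
a1 = (4, 2, 1, 1)
a2 = (0, 1, 5, 2)

def worst_loss(p):
    """max over theta of L(theta, p~) for the mixture p~ = (p, 1-p)."""
    return max(p * l1 + (1 - p) * l2 for l1, l2 in zip(a1, a2))

# Minimize the worst-state loss over a grid of mixing probabilities.
best_p = min((i / 1000 for i in range(1001)), key=worst_loss)
print(best_p, worst_loss(best_p))  # 0.625 and 2.5, i.e. p = 5/8, value 5/2
```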
Example 3.4.7
Consider a decision problem with the following loss
table:

        θ1   θ2   θ3
 a1      0    3    4
 a2      4    3    0

Let p~ = ( p, 1 − p ) be a mixed action. Then

L(θ1, p~) = 4 (1 − p)
L(θ2, p~) = 3
L(θ3, p~) = 4 p.
Example 3.5.1
Consider the following decision problem with loss table
given by

        a1    a2
 θ1    100   101
 θ2     90     0

By the minimax principle, action a1 is minimax. If the true
state of nature is θ1, taking action a2 incurs only 1% more
loss than action a1. However, if θ2 is the true state of
nature, taking action a2 is much better than using
action a1.
Example 3.5.2
Consider a decision problem with loss table given by

        a1   a2
 θ1      0    1
 θ2      6    5
Example 3.5.3
Consider a decision problem with loss table given by

        a1   a2   a3   a4
 θ1      6    5    2    3
 θ2      1    2    5    4
The vector from the loss point of a3 to that of a6 is
perpendicular to ( w, 1 − w ):

( w, 1 − w ) · ( L(θ1, a6) − L(θ1, a3) , L(θ2, a6) − L(θ2, a3) ) = 0,

or L(τ, a6) = L(τ, a3). Also

L(τ, a6) < L(τ, a4).
Recall that

Lr(θ, a) = L(θ, a) − min_{a'∈A} L(θ, a')
         = L(θ, a) − k(θ), say.

Therefore,

Lr(τ, a) = L(τ, a) − Σ_i k(θi) P(θ = θi).

Since Lr(τ, a) differs from L(τ, a) by a term that does
not involve the action a, there is no difference between
using loss and regret under the Bayes principle.
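This equivalence is easy to check numerically: the Bayes loss and the Bayes regret of each action differ by the same constant, so both criteria select the same action. The sketch below uses the loss table of Example 3.5.3 above; the prior weight w = 0.3 is an arbitrary illustrative choice of mine.

```python
# Loss points (L(theta1, a), L(theta2, a)) from Example 3.5.3.
loss = {
    "a1": (6, 1),
    "a2": (5, 2),
    "a3": (2, 5),
    "a4": (3, 4),
}
w = 0.3  # P(theta = theta1); any value in [0, 1] gives the same conclusion

# Row minima k(theta) = min_a L(theta, a).
k = tuple(min(l[m] for l in loss.values()) for m in range(2))

def bayes_loss(a):
    return w * loss[a][0] + (1 - w) * loss[a][1]

def bayes_regret(a):
    # Lr(tau, a) = L(tau, a) - sum_theta k(theta) P(theta)
    return w * (loss[a][0] - k[0]) + (1 - w) * (loss[a][1] - k[1])

# Both criteria pick the same Bayes action.
print(min(loss, key=bayes_loss), min(loss, key=bayes_regret))
```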
Example 3.5.4
Consider the decision problem with the following loss
table:

        a1   a2   a3
 θ1      2    5    3
 θ2      3    1    5

Let the prior probabilities of θ be

τ : P(θ = θ1) = w ;  P(θ = θ2) = 1 − w.

The Bayes losses of the (pure) actions are

L(τ, a1) = 2 w + 3 (1 − w) = 3 − w
L(τ, a2) = 5 w + 1 (1 − w) = 1 + 4 w
L(τ, a3) = 3 w + 5 (1 − w) = 5 − 2 w.
The graphs of these lines are shown below:
The Bayes action is

a_τ = { a2 ,  w ≤ 2/5
      { a1 ,  w ≥ 2/5 .
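The threshold w = 2/5 found from the graph can be confirmed in code. This sketch evaluates the three Bayes losses of Example 3.5.4 and picks the minimizing action; the function name is my own.

```python
def bayes_action(w):
    """Bayes action for Example 3.5.4 under prior P(theta1) = w."""
    losses = {
        "a1": 3 - w,        # 2w + 3(1 - w)
        "a2": 1 + 4 * w,    # 5w + 1(1 - w)
        "a3": 5 - 2 * w,    # 3w + 5(1 - w)
    }
    return min(losses, key=losses.get)

# Below the threshold 2/5 the Bayes action is a2; above it, a1.
print(bayes_action(0.2), bayes_action(0.6))  # a2 a1
```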
Dominance
Action a' is said to dominate action a if

L(θ, a') ≤ L(θ, a)  for all θ ∈ Θ.

If the above inequality is strict for some θ,
then action a is said to be inadmissible.
An action which is not inadmissible is called
an admissible action.
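The definitions above translate directly into code. This sketch checks dominance between pure actions in a two-state problem; the first four loss points are those of Example 3.5.3, and a5 is a hypothetical extra action of mine, added so that something is actually dominated.

```python
# Loss points (L(theta1, a), L(theta2, a)).
loss = {
    "a1": (6, 1),
    "a2": (5, 2),
    "a3": (2, 5),
    "a4": (3, 4),
    "a5": (3, 6),   # hypothetical action, dominated by a4
}

def dominates(a, b):
    """a dominates b: L(theta, a) <= L(theta, b) for all theta,
    with strict inequality for some theta."""
    pairs = list(zip(loss[a], loss[b]))
    return all(x <= y for x, y in pairs) and any(x < y for x, y in pairs)

# Admissible actions are those dominated by no other action.
admissible = [a for a in loss if not any(dominates(b, a) for b in loss)]
print(admissible)  # a5 is inadmissible; the others are admissible
```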
Example 3.6.1
Consider a decision problem with loss points of actions
displayed in the following figure:

τ : P(θ = θ1) = w ,  P(θ = θ2) = 1 − w

τ0 : P(θ = θ1) = 2/5 ,  P(θ = θ2) = 3/5 .
From the statistician's viewpoint, this is the worst case
when dealing with nature, and he will call this prior
distribution the least favorable prior distribution.
Under this prior distribution, the minimum Bayes loss
attainable by the statistician is maximized.