
IE 325: Stochastic Models

Homework Assignment 3
Solutions
Spring 2022

Question 1. Consider the Markov chain whose transition probability matrix is given by

         1    2    3    4
    1 (  1    0    0    0  )
P = 2 (  0   0.6  0.3  0.1 )
    3 ( 0.1  0.4  0.5   0  )
    4 (  0    0    0    1  )

(a) Determine the probability that the Markov chain ends in state 1, given that it starts
at state i, for i = 2, 3.

(b) Determine the mean time to absorption given that it starts at state i, for i = 2, 3.

(c) Assume that one wins a reward of 2 TL for each time the Markov chain visits state
2, and wins 3 TL for each time the Markov chain visits state 3. Starting from state 2,
compute the expected total reward until the end of the game.

Solution of Question 1. (a) Let T = min{n ≥ 0 | Xn ∈ {1, 4}} and vi = P{XT = 1 | X0 = i}. Then

v1 =1
v2 = 0v1 + 0.6v2 + 0.3v3 + 0.1v4
v3 = 0.1v1 + 0.4v2 + 0.5v3 + 0v4
v4 =0

Solving this system of equations gives us v2 = 3/8 and v3 = 1/2.

(b) Let µx = E[T |X0 = x]. Then

µ1 =0
µ2 = 1 + 0µ1 + 0.6µ2 + 0.3µ3 + 0.1µ4
µ3 = 1 + 0.1µ1 + 0.4µ2 + 0.5µ3 + 0µ4
µ4 =0

Solving this system of equations gives us µ2 = µ3 = 10.

(c) With the given rewards, we update the equations as follows, where µx now denotes the expected total reward starting from state x:

µ1 =0
µ2 = 2 + 0µ1 + 0.6µ2 + 0.3µ3 + 0.1µ4
µ3 = 3 + 0.1µ1 + 0.4µ2 + 0.5µ3 + 0µ4
µ4 =0

Solving these equations gives µ2 = 23.75. Hence, with the given rewards, the expected total reward until the end of the game is 23.75 TL.
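All three parts share the same first-step-analysis structure, so they can be checked numerically with one linear solve per part. The sketch below (not part of the original solution) restricts the chain to the transient states {2, 3} and solves (I − Q)v = r with the appropriate right-hand side:

```python
import numpy as np

# First-step analysis for Question 1 (states 1..4; states 1 and 4 absorbing).
# Restricted to the transient states {2, 3}, each part solves
#     v = Q v + r   <=>   (I - Q) v = r,
# with the same coefficient matrix and a different right-hand side r.
Q = np.array([[0.6, 0.3],
              [0.4, 0.5]])              # transitions among transient states 2, 3
A = np.eye(2) - Q

# (a) absorption in state 1: r_i = one-step probability of jumping to state 1
v = np.linalg.solve(A, np.array([0.0, 0.1]))

# (b) mean time to absorption: r_i = 1 for every transient state
mu = np.linalg.solve(A, np.array([1.0, 1.0]))

# (c) expected total reward: 2 TL per visit to state 2, 3 TL per visit to state 3
w = np.linalg.solve(A, np.array([2.0, 3.0]))

print(v, mu, w)   # v = [3/8, 1/2], mu = [10, 10], w[0] = 23.75
```

The coefficient matrix is reused because only the per-visit "reward" changes between the three parts: probability mass in (a), one unit of time in (b), and TL in (c).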

Question 2. Markov chains are used to model the completion time of projects that have several stages. Each stage can take a short or a long time depending on the project team capabilities. Consider a project with four stages corresponding to Preparation (P), Development (D), Implementation (I) and Completion (C). To model the progress of this project, we use a Markov chain on the state space E = {P, D, I, C} with the following transition matrix between the states:
         P     D     I     C
    P ( 0.9   0.1    0     0   )
P = D (  0   0.99  0.01    0   )
    I (  0    0    0.95  0.05  )
    C (  0    0     0     1    )

(a) What is the expected amount of time it takes to complete the project?

(b) What is the probability that after three units of time, you have not yet finished the
Preparation stage?

Solution of Question 2. (a) Let T be the time it takes to reach state “C” and define
µx = E[T |X0 = x]. Then

µP = 1 + 0.9µP + 0.1µD
µD = 1 + 0.99µD + 0.01µI
µI = 1 + 0.95µI + 0.05µC
µC =0

Solving this system of equations gives us µP = E[T |X0 = P ] = 130.


(b) From state P the chain either stays in P or moves on to D, so not having finished the Preparation stage after three units of time means staying in P for three steps:

P^(3)(P, P) = (0.9)^3 = 0.729
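Both answers can be verified numerically; the sketch below (an added check, not part of the original solution) solves the mean-absorption system for part (a) and raises the matrix to the third power for part (b):

```python
import numpy as np

# Question 2: mean time to reach state C, and the 3-step probability
# of still being in the Preparation stage.  State order: P, D, I, C.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.99, 0.01, 0.00],
              [0.00, 0.00, 0.95, 0.05],
              [0.00, 0.00, 0.00, 1.00]])

# (a) solve (I - Q) mu = 1 on the transient states {P, D, I}
Q = P[:3, :3]
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(mu[0])        # expected completion time from P: 130.0

# (b) entry (P, P) of the 3-step transition matrix
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 0])     # = 0.9^3 = 0.729
```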

Question 3. A red die is rolled a single time. A green die is rolled repeatedly. The game
stops the first time that the sum of the red and green die is either 4 or 7. What is the
probability that the game stops with a sum of 4?

(a) Solve the question using probabilistic arguments.

(b) Solve the question using Markov Chains.

Solution of Question 3. (a) Let W be the event that the game stops with a sum of 4, and let R = i be the event that the red die shows i. Conditioning on the red die,

P(W) = Σ_{i=1}^{6} P(W | R = i) · P(R = i).

If the red die shows 4, 5 or 6, the sum can never equal 4, so

P(W | R = 4) = P(W | R = 5) = P(W | R = 6) = 0.

If R = 1, the game stops with a sum of 4 when the green die shows 3, stops with a sum of 7 when it shows 6, and effectively restarts otherwise:

P(W | R = 1) = P(W | R = 1, G = 3) P(G = 3) + P(W | R = 1, G = 6) P(G = 6)
             + P(W | R = 1, G ∈ {1, 2, 4, 5}) P(G ∈ {1, 2, 4, 5})
             = 1 · (1/6) + 0 · (1/6) + P(W | R = 1) · (4/6),

so P(W | R = 1) = 1/2. The same argument applies for R = 2 (winning roll G = 2, losing roll G = 5) and for R = 3 (winning roll G = 1, losing roll G = 4):

P(W | R = 2) = 1 · (1/6) + 0 · (1/6) + P(W | R = 2) · (4/6)  ⇒  P(W | R = 2) = 1/2
P(W | R = 3) = 1 · (1/6) + 0 · (1/6) + P(W | R = 3) · (4/6)  ⇒  P(W | R = 3) = 1/2

Thus,

P(W) = P(W | R = 1) P(R = 1) + P(W | R = 2) P(R = 2) + P(W | R = 3) P(R = 3)
     = (1/2)(1/6) + (1/2)(1/6) + (1/2)(1/6) = 1/4.
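The answer 1/4 is easy to sanity-check by simulation. The short sketch below (an added check, not part of the original solution) plays the game many times and estimates the probability of stopping at a sum of 4:

```python
import random

# Question 3: roll the red die once, then roll the green die repeatedly
# until red + green equals 4 or 7; record whether the game stopped at 4.
def play(rng):
    red = rng.randint(1, 6)
    while True:
        green = rng.randint(1, 6)
        if red + green == 4:
            return True       # game stops with a sum of 4
        if red + green == 7:
            return False      # game stops with a sum of 7

rng = random.Random(0)        # fixed seed for reproducibility
n = 200_000
wins = sum(play(rng) for _ in range(n))
print(wins / n)               # should be close to 1/4
```

Note the loop always terminates: whatever the red die shows, a sum of 7 is reachable on every green roll, so the stopping time is almost surely finite.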

(b) Let W be the event that the sum is 4.

P(W) = P(W ∩ {R is 1, 2 or 3}) + P(W ∩ {R is 4, 5 or 6})
     = P(W ∩ {R is 1, 2 or 3}) + 0
     = P(W ∩ {R = 1}) + P(W ∩ {R = 2}) + P(W ∩ {R = 3})
     = 3 P(W ∩ {R = 1})                    (by symmetry, as in part (a))
     = 3 P(W | {R = 1}) P({R = 1})
     = 3 · (1/6) · P(W | {R = 1})
     = (1/2) P(W | {R = 1})

Let Xn denote the sum at the nth roll given that the red die shows 1. Then (Xn)n≥1 is a Markov chain with state space {2, 3, 4, 5, 6, 7} and one-step transition probability matrix P, where
         2    3    4    5    6    7
    2 ( 1/6  1/6  1/6  1/6  1/6  1/6 )
    3 ( 1/6  1/6  1/6  1/6  1/6  1/6 )
P = 4 ( 1/6  1/6  1/6  1/6  1/6  1/6 )
    5 ( 1/6  1/6  1/6  1/6  1/6  1/6 )
    6 ( 1/6  1/6  1/6  1/6  1/6  1/6 )
    7 ( 1/6  1/6  1/6  1/6  1/6  1/6 )

and with the initial distribution P(X1 = i) = 1/6 for i = 2, 3, . . . , 7.
Let T = min{n : Xn = 4 or Xn = 7}. We are interested in P(XT = 4).
Let vi = P(XT = 4|X1 = i) for i = 2, 3, . . . , 7 where v4 = 1 and v7 = 0 are boundary
conditions.
P(XT = 4) = Σ_{i=2}^{7} P(XT = 4 | X1 = i) P(X1 = i) = Σ_{i=2}^{7} (1/6) vi,  where v4 = 1, v7 = 0.

Let j be different from 4 and 7. Then

vj = P(XT = 4 | X1 = j)
   = Σ_{i=2}^{7} P(XT = 4 | X2 = i, X1 = j) P(X2 = i | X1 = j)
   = Σ_{i=2}^{7} P(XT = 4 | X2 = i) Pji
   = Σ_{i=2}^{7} P(XT = 4 | X1 = i) Pji
   = Σ_{i=2}^{7} (1/6) vi

so

vj = (1/6)(v2 + v3 + v4 + v5 + v6 + v7) for j ∈ {2, 3, 5, 6}.
Observe that v2 = v3 = v5 = v6. Then

vj = (1/6)(vj + vj + v4 + vj + vj + v7) = (1/6)(4vj + 1) for j ∈ {2, 3, 5, 6},

which gives

vj = 1/2 for j ∈ {2, 3, 5, 6}.
We can conclude that

vi = 1 if i = 4,  vi = 0 if i = 7,  vi = 1/2 if i ≠ 4, 7.

P(W | {R = 1}) = P(XT = 4)
             = Σ_{i=2}^{7} vi P(X1 = i)
             = (4/6) · (1/2) + (1/6) · 1 + (1/6) · 0
             = 1/2

Hence

P(W) = (1/2) P(W | {R = 1}) = (1/2)(1/2) = 1/4.
Question 4. Consider the Markov chain (Xn )n whose transition probability matrix is given
by

         0    1    2
    0 ( 0.2  0.3  0.5 )
P = 1 ( 0.1  0.5  0.4 )
    2 (  0    0    1  )

Let X0 = 0. Derive the equations necessary to calculate the probability that the time the
process ends up in state 2 is divisible by three. You do not need to solve the equations.

Solution of Question 4. We enlarge the state space so that it also records the time modulo 3. Let

E = {0, 0', 0'', 1, 1', 1'', 2, 2', 2''},

where, for i = 0, 1, 2:

• state i means the MC is at state i at a time which is divisible by 3;
• state i' means the MC is at state i at a time which gives a remainder 1 when divided by 3;
• state i'' means the MC is at state i at a time which gives a remainder 2 when divided by 3.

Then the transition probability matrix is given as follows:

           0    0'   0''   1    1'   1''   2    2'   2''
    0   (  0   0.2    0    0   0.3    0    0   0.5    0  )
    0'  (  0    0    0.2   0    0    0.3   0    0    0.5 )
    0'' ( 0.2   0     0   0.3   0     0   0.5   0     0  )
    1   (  0   0.1    0    0   0.5    0    0   0.4    0  )
P = 1'  (  0    0    0.1   0    0    0.5   0    0    0.4 )
    1'' ( 0.1   0     0   0.5   0     0   0.4   0     0  )
    2   (  0    0     0    0    0     0    0    1     0  )
    2'  (  0    0     0    0    0     0    0    0     1  )
    2'' (  0    0     0    0    0     0    1    0     0  )

Let T = min{n : Xn ∈ {2, 2', 2''}}. We want to compute P(XT = 2 | X0 = 0).

Let vi = P{XT = 2 | X0 = i}. Then the first-step equations are as follows:

v0   = 0.2 v0'  + 0.3 v1'  + 0.5 v2'
v0'  = 0.2 v0'' + 0.3 v1'' + 0.5 v2''
v0'' = 0.2 v0   + 0.3 v1   + 0.5 v2
v1   = 0.1 v0'  + 0.5 v1'  + 0.4 v2'
v1'  = 0.1 v0'' + 0.5 v1'' + 0.4 v2''
v1'' = 0.1 v0   + 0.5 v1   + 0.4 v2
v2   = 1
v2'  = 0
v2'' = 0
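The question only asks for the equations, but they are straightforward to solve numerically. The sketch below (an added check, not part of the original solution, with the primed states written v0p, v0pp, etc.) substitutes the boundary values — absorption in 2' or 2'' means the absorption time is not divisible by 3 — and solves for the six remaining unknowns:

```python
import numpy as np

# Question 4: solve the first-step equations on the time-expanded chain.
# Unknowns, in order: [v0, v0', v0'', v1, v1', v1''].  The boundary values
# v2 = 1, v2' = 0, v2'' = 0 are moved to the right-hand side, so the
# system is v = C v + b.
C = np.array([
    [0.0, 0.2, 0.0, 0.0, 0.3, 0.0],   # v0   = 0.2 v0'  + 0.3 v1'  (+ 0.5 v2' = 0)
    [0.0, 0.0, 0.2, 0.0, 0.0, 0.3],   # v0'  = 0.2 v0'' + 0.3 v1'' (+ 0.5 v2'' = 0)
    [0.2, 0.0, 0.0, 0.3, 0.0, 0.0],   # v0'' = 0.2 v0   + 0.3 v1   + 0.5 v2
    [0.0, 0.1, 0.0, 0.0, 0.5, 0.0],   # v1   = 0.1 v0'  + 0.5 v1'  (+ 0.4 v2' = 0)
    [0.0, 0.0, 0.1, 0.0, 0.0, 0.5],   # v1'  = 0.1 v0'' + 0.5 v1'' (+ 0.4 v2'' = 0)
    [0.1, 0.0, 0.0, 0.5, 0.0, 0.0],   # v1'' = 0.1 v0   + 0.5 v1   + 0.4 v2
])
b = np.array([0.0, 0.0, 0.5, 0.0, 0.0, 0.4])   # constant terms from v2 = 1

v = np.linalg.solve(np.eye(6) - C, b)
print(v[0])   # v0 = P(absorption time is divisible by 3 | X0 = 0)
```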

Alternative solution for Q4: Let T = min{n : Xn = 2}. We want to compute


P(T = 3k for some k|X0 = 0).

Let ai = P(T = 3k for some k|X0 = i) for i = 0, 1, 2


bi = P(T = 3k − 1 for some k|X0 = i) for i = 0, 1, 2
ci = P(T = 3k − 2 for some k|X0 = i) for i = 0, 1, 2

Note that a2 = 1 and b2 = c2 = 0 are boundary conditions. We have ai + bi + ci = 1 for i = 0, 1, 2, since T is almost surely finite and is of the form 3k, 3k − 1, or 3k − 2.

a0 = P(T = 3k for some k | X0 = 0)
   = P(T = 3k | X1 = 0, X0 = 0) P(X1 = 0 | X0 = 0)
   + P(T = 3k | X1 = 1, X0 = 0) P(X1 = 1 | X0 = 0)
   + P(T = 3k | X1 = 2, X0 = 0) P(X1 = 2 | X0 = 0)
   = P(T = 3k − 1 | X0 = 0) P00 + P(T = 3k − 1 | X0 = 1) P01 + 0
   = b0 · P00 + b1 · P01 = b0 · 0.2 + b1 · 0.3

a1 = P(T = 3k for some k | X0 = 1)
   = P(T = 3k | X1 = 0, X0 = 1) P(X1 = 0 | X0 = 1)
   + P(T = 3k | X1 = 1, X0 = 1) P(X1 = 1 | X0 = 1)
   + P(T = 3k | X1 = 2, X0 = 1) P(X1 = 2 | X0 = 1)
   = P(T = 3k − 1 | X0 = 0) P10 + P(T = 3k − 1 | X0 = 1) P11 + 0
   = b0 · P10 + b1 · P11 = b0 · 0.1 + b1 · 0.5

a2 = 1

For j ≠ 2,

bj = P(T = 3k − 1 for some k | X0 = j)
   = Σ_{i=0}^{2} P(T = 3k − 1 | X1 = i, X0 = j) P(X1 = i | X0 = j)
   = Σ_{i=0}^{2} P(T = 3k − 2 | X0 = i) Pji
   = Σ_{i=0}^{2} ci Pji

b0 = c0 P00 + c1 P01 = c0 · 0.2 + c1 · 0.3


b1 = c0 P10 + c1 P11 = c0 · 0.1 + c1 · 0.5
b2 = 0

For j ≠ 2,

cj = P(T = 3k − 2 for some k | X0 = j)
   = Σ_{i=0}^{2} P(T = 3k − 2 | X1 = i, X0 = j) P(X1 = i | X0 = j)
   = Σ_{i=0}^{2} P(T = 3(k − 1) | X0 = i) Pji
   = Σ_{i=0}^{2} ai Pji

c0 = a0 P00 + a1 P01 + a2 P02 = a0 · 0.2 + a1 · 0.3 + a2 · 0.5


c1 = a0 P10 + a1 P11 + a2 P12 = a0 · 0.1 + a1 · 0.5 + a2 · 0.4
c2 = 0

Therefore the full set of equations is:

a0 + b0 + c0 = 1
a1 + b1 + c1 = 1
a2 + b2 + c2 = 1
a0 = b0 · 0.2 + b1 · 0.3
a1 = b0 · 0.1 + b1 · 0.5
a2 = 1
b0 = c0 · 0.2 + c1 · 0.3
b1 = c0 · 0.1 + c1 · 0.5
b2 = 0
c0 = a0 · 0.2 + a1 · 0.3 + a2 · 0.5
c1 = a0 · 0.1 + a1 · 0.5 + a2 · 0.4
c2 = 0
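After substituting the boundary values a2 = 1 and b2 = c2 = 0, the remaining six equations form a small linear system. The sketch below (an added check, not part of the original solution) solves it with numpy:

```python
import numpy as np

# Question 4, alternative formulation: solve for (a0, a1, b0, b1, c0, c1)
# after substituting a2 = 1 and b2 = c2 = 0.  The system is x = C x + b:
#   a0 = 0.2 b0 + 0.3 b1          b0 = 0.2 c0 + 0.3 c1
#   a1 = 0.1 b0 + 0.5 b1          b1 = 0.1 c0 + 0.5 c1
#   c0 = 0.2 a0 + 0.3 a1 + 0.5
#   c1 = 0.1 a0 + 0.5 a1 + 0.4
# Unknown order: [a0, a1, b0, b1, c0, c1].
C = np.array([
    [0.0, 0.0, 0.2, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.2, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.1, 0.5],
    [0.2, 0.3, 0.0, 0.0, 0.0, 0.0],
    [0.1, 0.5, 0.0, 0.0, 0.0, 0.0],
])
b = np.array([0.0, 0.0, 0.0, 0.0, 0.5, 0.4])   # constants from a2 = 1

a0, a1, b0, b1, c0, c1 = np.linalg.solve(np.eye(6) - C, b)
print(a0)               # P(T is divisible by 3 | X0 = 0)
print(a0 + b0 + c0)     # should be 1: absorption in state 2 is certain
```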

Question 5. A mouse is placed in a 3 × 3 box containing nine rooms as illustrated in the
figure below. At each step it leaves the room it is in by choosing at random one of the doors
out of the room.

(a) Model this process as a Markov chain with 9 states and write down the transition
probability matrix.

(b) Can you model this process as a Markov chain with 3 states using a symmetry argu-
ment?

Solution of Question 5. Let Xn be the number of the room that the mouse is in at the
nth step.

(a) Then Xn ∈ {1, 2, . . . , 9} and (Xn)n≥0 is a Markov chain with the state space E = {1, 2, . . . , 9} and the transition probability matrix P

          1    2    3    4    5    6    7    8    9
    1 (   0   1/2   0   1/2   0    0    0    0    0  )
    2 (  1/3   0   1/3   0   1/3   0    0    0    0  )
    3 (   0   1/2   0    0    0   1/2   0    0    0  )
    4 (  1/3   0    0    0   1/3   0   1/3   0    0  )
P = 5 (   0   1/4   0   1/4   0   1/4   0   1/4   0  )
    6 (   0    0   1/3   0   1/3   0    0    0   1/3 )
    7 (   0    0    0   1/2   0    0    0   1/2   0  )
    8 (   0    0    0    0   1/3   0   1/3   0   1/3 )
    9 (   0    0    0    0    0   1/2   0   1/2   0  )

(b) There are three types of rooms:

• Type I: the ones with two neighbors, the corner rooms {1, 3, 7, 9}.
• Type II: the ones with three neighbors, the edge rooms {2, 4, 6, 8}.
• Type III: the one with four neighbors, the central room {5}.

1 (I)    2 (II)   3 (I)
4 (II)   5 (III)  6 (II)
7 (I)    8 (II)   9 (I)

Let Yn be the type of the room the mouse is in at the nth step. Then Yn ∈ {I, II, III} and (Yn)n is a Markov chain with the state space E = {I, II, III} and the transition probability matrix

           I    II   III
    I   (  0    1    0  )
Q = II  ( 2/3   0   1/3 )
    III (  0    1    0  )
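The symmetry argument can be verified numerically: the 9-state chain is lumpable into the 3 types because every room of a given type sends the same total probability into each type. The sketch below (an added check, not part of the original solution) builds P from the door structure and recovers Q:

```python
import numpy as np

# Question 5: build the 9-room chain from the door structure, then check
# that grouping rooms by type (corner / edge / centre) is a valid lumping.
neighbors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 6],
             4: [1, 5, 7], 5: [2, 4, 6, 8], 6: [3, 5, 9],
             7: [4, 8], 8: [5, 7, 9], 9: [6, 8]}
P = np.zeros((9, 9))
for room, doors in neighbors.items():
    P[room - 1, [d - 1 for d in doors]] = 1.0 / len(doors)

room_type = {1: "I", 2: "II", 3: "I", 4: "II", 5: "III",
             6: "II", 7: "I", 8: "II", 9: "I"}
labels = ["I", "II", "III"]

# For each room, the total probability of jumping into each type.
mass = {room: tuple(P[room - 1, [r - 1 for r in range(1, 10)
                                 if room_type[r] == lbl]].sum()
                    for lbl in labels)
        for room in range(1, 10)}

# Lumpability: rooms of the same type must give identical vectors.
for t in labels:
    vectors = {mass[r] for r in range(1, 10) if room_type[r] == t}
    assert len(vectors) == 1

# One representative per type yields the lumped matrix Q.
Q = np.array([mass[{"I": 1, "II": 2, "III": 5}[t]] for t in labels])
print(Q)   # rows: (0, 1, 0), (2/3, 0, 1/3), (0, 1, 0) — matches part (b)
```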

