Homework Assignment 3
Solutions
Spring 2022
Question 1. Consider the Markov chain whose transition probability matrix P is given by

        1     2     3     4
  1     1     0     0     0
  2     0     0.6   0.3   0.1
  3     0.1   0.4   0.5   0
  4     0     0     0     1
(a) Determine the probability that the Markov chain ends in state 1, given that it starts
at state i, for i = 2, 3.
(b) Determine the mean time to absorption given that it starts at state i, for i = 2, 3.
(c) Assume that one wins a reward of 2 TL for each time the Markov chain visits state
2, and wins 3 TL for each time the Markov chain visits state 3. Starting from state 2,
compute the expected total reward until the end of the game.
Solution of Question 1. (a) Let T = min{n ≥ 0|Xn ∈ {1, 4}} and vi = P{XT =
1|X0 = i}
v1 =1
v2 = 0v1 + 0.6v2 + 0.3v3 + 0.1v4
v3 = 0.1v1 + 0.4v2 + 0.5v3 + 0v4
v4 =0
Solving this system of equations gives us v2 = 3/8 and v3 = 1/2.
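As a quick numerical cross-check (not part of the required solution), the 2 × 2 system for v2 and v3 can be solved directly; this is a minimal sketch assuming Python with NumPy installed.

```python
import numpy as np

# First-step analysis for Question 1(a). With v1 = 1 and v4 = 0 known,
# the equations for the transient states reduce to
#   v2 = 0.6*v2 + 0.3*v3 + 0.1*0
#   v3 = 0.1*1 + 0.4*v2 + 0.5*v3
# i.e. A @ x = b with x = (v2, v3).
A = np.array([[1 - 0.6, -0.3],
              [-0.4,    1 - 0.5]])
b = np.array([0.0, 0.1])
v2, v3 = np.linalg.solve(A, b)
print(v2, v3)  # 0.375 0.5, i.e. v2 = 3/8 and v3 = 1/2
```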
(b) Let µi = E[T | X0 = i] denote the mean time to absorption starting from state i. Then
µ1 = 0
µ2 = 1 + 0µ1 + 0.6µ2 + 0.3µ3 + 0.1µ4
µ3 = 1 + 0.1µ1 + 0.4µ2 + 0.5µ3 + 0µ4
µ4 = 0
Solving this system gives µ2 = 10 and µ3 = 10.
(c) With the given rewards, we update the equations as follows
µ1 =0
µ2 = 2 + 0µ1 + 0.6µ2 + 0.3µ3 + 0.1µ4
µ3 = 3 + 0.1µ1 + 0.4µ2 + 0.5µ3 + 0µ4
µ4 =0
Solving these equations gives µ2 = 23.75. Hence, starting from state 2, the expected total reward until the end of the game is 23.75 TL.
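Parts (b) and (c) can be checked the same way. The sketch below (again assuming NumPy) restates the first-step equations for the transient states {2, 3} as (I − Q)x = g, where Q is the transient-to-transient block and g collects the per-visit gains (1 per step for part (b); 2 TL and 3 TL for part (c)).

```python
import numpy as np

# Transient-to-transient block of P for states {2, 3} in Question 1.
Q = np.array([[0.6, 0.3],
              [0.4, 0.5]])
I2 = np.eye(2)

# (b) Mean time to absorption: one unit of time per visit to a transient state.
mu = np.linalg.solve(I2 - Q, np.array([1.0, 1.0]))
print(mu)        # [10. 10.]  -> mean absorption time from states 2 and 3

# (c) Expected total reward: 2 TL per visit to state 2, 3 TL per visit to state 3.
w = np.linalg.solve(I2 - Q, np.array([2.0, 3.0]))
print(w[0])      # 23.75 TL starting from state 2
```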
Question 2. Markov chains are used to model the completion time of projects that have
several stages. Each stage can take a short or a long time depending on the project team
capabilities. Consider a project with four stages corresponding to Preparation (P), Development (D), Implementation (I) and Completion (C). To model the progress of this project, we use a Markov chain on the state space E = {P, D, I, C} with the following transition matrix between the states:
P D I C
P 0.9 0.1 0 0
D 0 0.99 0.01 0
I 0 0 0.95 0.05
C 0 0 0 1
(a) What is the expected amount of time it takes to complete the project?
(b) What is the probability that after three units of time, you have not yet finished the
Preparation stage?
Solution of Question 2. (a) Let T be the time it takes to reach state “C” and define
µx = E[T |X0 = x]. Then
µP = 1 + 0.9µP + 0.1µD
µD = 1 + 0.99µD + 0.01µI
µI = 1 + 0.95µI + 0.05µC
µC = 0
Solving from the bottom up gives µI = 20, µD = 120 and µP = 130, so the expected time to complete the project is µP = 130 time units.
(b) Starting from P, the chain remains in the Preparation stage at each step with probability 0.9 (and cannot re-enter P once it leaves), so the probability that the Preparation stage is not yet finished after three units of time is
P(X3 = P | X0 = P) = 0.9^3 = 0.729.
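Both parts can be verified numerically; the following is a minimal sketch assuming Python with NumPy installed.

```python
import numpy as np

# Transition matrix of the project chain, states ordered P, D, I, C.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.99, 0.01, 0.00],
              [0.00, 0.00, 0.95, 0.05],
              [0.00, 0.00, 0.00, 1.00]])

# (a) Mean time to reach C: solve (I - Q) mu = 1 over the transient states P, D, I.
Q = P[:3, :3]
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(mu)            # [130. 120.  20.] -> expected completion time 130 from P

# (b) Probability the Preparation stage is still unfinished after 3 time units:
# the (P, P) entry of the three-step transition matrix.
print(np.linalg.matrix_power(P, 3)[0, 0])   # 0.729 = 0.9**3
```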
Question 3. A red die is rolled a single time. A green die is rolled repeatedly. The game
stops the first time that the sum of the red and green die is either 4 or 7. What is the
probability that the game stops with a sum of 4?
Solution of Question 3. (a) Let W be the event that the game stops with a sum of 4, and let R = i be the event that we see i on the front face of the red die. Conditioning on the red die,
P(W) = Σ_{i=1}^{6} P(W | R = i) · P(R = i).
Note that P(W | R = i) = 0 for i ∈ {4, 5, 6}, since the sum can then never equal 4, while for i ∈ {1, 2, 3} the first-step analysis in part (b) below shows that P(W | R = i) = 1/2. Hence
P(W) = (3/6) · (1/2) = 1/4.
(b) Let W be the event that the sum is 4, let Xn be the sum after the nth roll of the green die, and let T be the first time that the sum is either 4 or 7. Let j be different from 4 and 7. Conditioning on the next roll,
vj = P(XT = 4 | X1 = j)
   = Σ_{i=2}^{7} P(XT = 4 | X2 = i, X1 = j) · P(X2 = i | X1 = j)
   = Σ_{i=2}^{7} P(XT = 4 | X2 = i) · P(X2 = i | X1 = j)
   = Σ_{i=2}^{7} P(XT = 4 | X2 = i) · Pji
   = Σ_{i=2}^{7} P(XT = 4 | X1 = i) · Pji
   = Σ_{i=2}^{7} (1/6) · vi
so that
vj = (1/6)(v2 + v3 + v4 + v5 + v6 + v7) for j ∈ {2, 3, 5, 6}.
Observe that v2 = v3 = v5 = v6. Then
vj = (1/6)(vj + vj + v4 + vj + vj + v7) for j ∈ {2, 3, 5, 6}.
Since v4 = 1 and v7 = 0, this gives 6vj = 4vj + 1, hence
vj = 1/2 for j ∈ {2, 3, 5, 6}.
We can conclude that
vi = 1 if i = 4,
vi = 0 if i = 7,
vi = 1/2 if i ≠ 4, 7.
The same computation applies whenever the red die shows 1, 2 or 3 (only the labels of the six possible sums change), so P(W | R = i) = 1/2 for i ∈ {1, 2, 3}, as used in part (a).
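As a sanity check on the value 1/4 obtained by combining parts (a) and (b), the game can also be simulated directly. The sketch below uses only the Python standard library; the function name stops_with_four is just illustrative.

```python
import random

def stops_with_four(rng: random.Random) -> bool:
    """Play one game: roll the red die once, then roll the green die
    until red + green is 4 or 7; return True if the game stops at 4."""
    red = rng.randint(1, 6)
    while True:
        total = red + rng.randint(1, 6)
        if total == 4:
            return True
        if total == 7:
            return False

rng = random.Random(2022)
n = 200_000
estimate = sum(stops_with_four(rng) for _ in range(n)) / n
print(estimate)  # should be close to 1/4
```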
Question 4. Consider the Markov chain on the state space E = {0, 1, 2} with transition probability matrix P given by

        0     1     2
  0     0.2   0.3   0.5
  1     0.1   0.5   0.4
  2     0     0     1

Let X0 = 0. Derive the equations necessary to calculate the probability that the time at which the process ends up in state 2 is divisible by three. You do not need to solve the equations.
Solution of Question 4. Let T = min{n ≥ 0 : Xn = 2} be the time at which the process ends up in state 2, and for i ∈ {0, 1, 2} define
vi = P(T ≡ 0 (mod 3) | X0 = i), vi′ = P(T ≡ 2 (mod 3) | X0 = i), vi′′ = P(T ≡ 1 (mod 3) | X0 = i).
(After one step, the remainder of the remaining time modulo 3 decreases by one.) Conditioning on the first step gives
v0 = 0.2v0′ + 0.3v1′ + 0.5v2′
v0′ = 0.2v0′′ + 0.3v1′′ + 0.5v2′′
v0′′ = 0.2v0 + 0.3v1 + 0.5v2
v1 = 0.1v0′ + 0.5v1′ + 0.4v2′
v1′ = 0.1v0′′ + 0.5v1′′ + 0.4v2′′
v1′′ = 0.1v0 + 0.5v1 + 0.4v2
v2 = 1
v2′ = 0
v2′′ = 0
The desired probability is v0.
Equivalently, the equations can be derived step by step. Write aj = P(T ≡ 0 (mod 3) | X0 = j), bj = P(T ≡ 2 (mod 3) | X0 = j) and cj = P(T ≡ 1 (mod 3) | X0 = j), so that aj = vj, bj = vj′ and cj = vj′′. For j ≠ 2, conditioning on the first step and using the Markov property,
bj = P(T ≡ 2 (mod 3) | X0 = j)
   = Σ_{k=0}^{2} P(T ≡ 2 (mod 3) | X1 = k, X0 = j) · P(X1 = k | X0 = j)
   = Σ_{k=0}^{2} P(T ≡ 1 (mod 3) | X0 = k) · Pjk
so that
bj = Σ_{k=0}^{2} ck Pjk for j ≠ 2.
Similarly,
cj = P(T ≡ 1 (mod 3) | X0 = j)
   = Σ_{k=0}^{2} P(T ≡ 1 (mod 3) | X1 = k, X0 = j) · P(X1 = k | X0 = j)
   = Σ_{k=0}^{2} P(T ≡ 0 (mod 3) | X0 = k) · Pjk
so that
cj = Σ_{k=0}^{2} ak Pjk for j ≠ 2.
In the same way, aj = Σ_{k=0}^{2} bk Pjk for j ≠ 2, and the boundary values are a2 = 1 and b2 = c2 = 0.
Since the chain is absorbed in state 2 with probability 1, these quantities also satisfy
a0 + b0 + c0 = 1
a1 + b1 + c1 = 1
a2 + b2 + c2 = 1
Writing out the first-step equations together with the boundary values, the required system is
a0 = b0 · 0.2 + b1 · 0.3
a1 = b0 · 0.1 + b1 · 0.5
a2 = 1
b0 = c0 · 0.2 + c1 · 0.3
b1 = c0 · 0.1 + c1 · 0.5
b2 = 0
c0 = a0 · 0.2 + a1 · 0.3 + a2 · 0.5
c1 = a0 · 0.1 + a1 · 0.5 + a2 · 0.4
c2 = 0
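The question does not ask for the solution of this system, but it can be solved numerically in a few lines; the sketch below assumes NumPy and orders the unknowns as (a0, a1, b0, b1, c0, c1), with the absorbing-state values a2 = 1, b2 = c2 = 0 substituted in.

```python
import numpy as np

# Unknowns x = (a0, a1, b0, b1, c0, c1); each row encodes one equation
# from the system above, with the absorbing-state values substituted.
A = np.array([
    [1,    0,   -0.2, -0.3,  0,    0  ],   # a0 = 0.2*b0 + 0.3*b1
    [0,    1,   -0.1, -0.5,  0,    0  ],   # a1 = 0.1*b0 + 0.5*b1
    [0,    0,    1,    0,   -0.2, -0.3],   # b0 = 0.2*c0 + 0.3*c1
    [0,    0,    0,    1,   -0.1, -0.5],   # b1 = 0.1*c0 + 0.5*c1
    [-0.2, -0.3, 0,    0,    1,    0  ],   # c0 = 0.2*a0 + 0.3*a1 + 0.5
    [-0.1, -0.5, 0,    0,    0,    1  ],   # c1 = 0.1*a0 + 0.5*a1 + 0.4
])
b = np.array([0, 0, 0, 0, 0.5, 0.4])
a0, a1, b0, b1, c0, c1 = np.linalg.solve(A, b)
print(a0)  # P(T is divisible by 3 | X0 = 0)
```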
Question 5. A mouse is placed in a 3 × 3 box containing nine rooms as illustrated in the
figure below. At each step it leaves the room it is in by choosing at random one of the doors
out of the room.
(a) Model this process as a Markov chain with 9 states and write down the transition
probability matrix.
(b) Can you model this process as a Markov chain with 3 states using a symmetry argu-
ment?
Solution of Question 5. Let Xn be the number of the room that the mouse is in at the
nth step.
(a) Then Xn ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9} and (Xn )n≥1 is a Markov Chain with the state space
E = {1, 2, . . . , 9} and the transition probability matrix P
        1     2     3     4     5     6     7     8     9
  1     0    1/2    0    1/2    0     0     0     0     0
  2    1/3    0    1/3    0    1/3    0     0     0     0
  3     0    1/2    0     0     0    1/2    0     0     0
  4    1/3    0     0     0    1/3    0    1/3    0     0
  5     0    1/4    0    1/4    0    1/4    0    1/4    0
  6     0     0    1/3    0    1/3    0     0     0    1/3
  7     0     0     0    1/2    0     0     0    1/2    0
  8     0     0     0     0    1/3    0    1/3    0    1/3
  9     0     0     0     0     0    1/2    0    1/2    0
(b) Yes. By symmetry, group the rooms into three types according to how many neighbors they have:
Type I: the rooms with two neighbors, the corner rooms {1, 3, 7, 9}.
Type II: the rooms with three neighbors, the edge rooms {2, 4, 6, 8}.
Type III: the room with four neighbors, the central room {5}.
Room numbers and their types in the 3 × 3 box:
  1 (I)    2 (II)   3 (I)
  4 (II)   5 (III)  6 (II)
  7 (I)    8 (II)   9 (I)
Let Yn be the type of the room the mouse is in at the nth step. From a corner room (type I) the mouse always moves to an edge room; from an edge room (type II) it moves to a corner with probability 2/3 and to the center with probability 1/3; from the central room (type III) it always moves to an edge room. Hence Yn ∈ {I, II, III} and (Yn)n is a Markov chain with the state space E = {I, II, III} and the transition probability matrix Q

        I     II    III
  I     0     1     0
  II   2/3    0    1/3
  III   0     1     0
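Both the 9 × 9 matrix in part (a) and the reduced chain in part (b) can be checked programmatically. The sketch below (assuming NumPy) builds P from the adjacency of the 3 × 3 grid and verifies the lumpability condition: every room of a given type has the same total transition probability into each type, which is exactly what makes (Yn) a Markov chain with transition matrix Q.

```python
import numpy as np

# Rooms 1..9 laid out row by row in the 3 x 3 box.
coords = {room: divmod(room - 1, 3) for room in range(1, 10)}  # room -> (row, col)

def neighbors(room):
    r, c = coords[room]
    return [other for other, (r2, c2) in coords.items()
            if abs(r - r2) + abs(c - c2) == 1]  # rooms sharing a door

# Build the 9 x 9 transition matrix: uniform over the doors out of each room.
P = np.zeros((9, 9))
for room in range(1, 10):
    for nb in neighbors(room):
        P[room - 1, nb - 1] = 1.0 / len(neighbors(room))

# Types: I = corners, II = edges, III = center.
types = {1: "I", 3: "I", 7: "I", 9: "I", 2: "II", 4: "II", 6: "II", 8: "II", 5: "III"}

# Lumpability check: for every room, the total probability of moving into each
# type must depend only on the room's own type.
for t in ("I", "II", "III"):
    rows = [tuple(round(sum(P[room - 1, other - 1]
                            for other in range(1, 10) if types[other] == s), 6)
                  for s in ("I", "II", "III"))
            for room in range(1, 10) if types[room] == t]
    assert len(set(rows)) == 1
    print(t, "->", rows[0])   # rows of the lumped matrix Q
```

Each printed row matches the corresponding row of Q above.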