Homework Assignment 4
Solutions
Spring 2020
Question 1. A professor continually gives exams to her students. She can give three possible types of exams, and her class is graded as either having done well or badly. Let pi denote the probability that the class does well on a type i exam, and suppose that p1 = 0.3, p2 = 0.6, and p3 = 0.9. If the class does well on an exam, then the next exam is equally likely to be any of the three types. If the class does badly, then the next exam is always type 1. What proportion of the exams are of type i, i = 1, 2, 3?
Solution of Question 1. Let Xn denote the type of exam in period n. Then (Xn )n≥0 is a
Markov Chain on state space E = {1, 2, 3}. The one-step transition probability matrix is:
1 2 3
1 0.8 0.1 0.1
P = 2 0.6 0.2 0.2
3 0.4 0.3 0.3
Note that the Markov chain is ergodic. The limiting probability distribution π = (π1, π2, π3) is the solution of:

πP = π,   π1 + π2 + π3 = 1.

Solving this system gives π1 = 5/7 and π2 = π3 = 1/7; that is, in the long run 5/7 of the exams are of type 1, and 1/7 each are of types 2 and 3.
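As a quick numerical check, the system πP = π with the normalization constraint can be solved in a few lines of numpy (a sketch; the least-squares call is just one convenient way to solve the stacked system):

```python
import numpy as np

# Transition matrix of the exam-type chain from Question 1.
P = np.array([[0.8, 0.1, 0.1],
              [0.6, 0.2, 0.2],
              [0.4, 0.3, 0.3]])

# Stack the balance equations (P^T - I) pi = 0 with the
# normalization constraint sum(pi) = 1 and solve by least squares.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)  # approx [0.7143, 0.1429, 0.1429] = [5/7, 1/7, 1/7]
```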
Question 2. Two urns A and B contain a total number of k balls. In each step, urn B is chosen with probability p (0 < p < 1/2), and urn A is chosen with probability 1 − p. Then a ball is selected from the chosen urn and placed in the other urn. If urn A is empty, we transfer 0 balls from urn A to urn B with probability 1 − p, and 1 ball from urn B to A with probability p.
(a) Model this process as a Markov chain.
(b) Find the one-step transition probability matrix.
(c) Does the Markov chain have a limiting distribution? If so, find the limiting distribution.
Solution of Question 2. (a) Let Xn be the number of balls in urn A at time n. The number of balls in urn A at the next step depends only on its current value, so (Xn)n≥0 is a Markov chain on the state space {0, 1, . . . , k}.
(b) The one-step transition probability matrix is:
         0    1    2    3   ···  k−1   k
    0   1−p   p    0    0   ···   0    0
    1   1−p   0    p    0   ···   0    0
P = 2    0   1−p   0    p   ···   0    0
    3    0    0   1−p   0   ···   0    0
    ⋮    ⋮    ⋮    ⋮    ⋮   ···   ⋮    ⋮
    k    0    0    0    0   ···  1−p   p

That is, P(0, 0) = 1 − p and P(0, 1) = p; for 1 ≤ i ≤ k − 1, P(i, i − 1) = 1 − p and P(i, i + 1) = p; and P(k, k − 1) = 1 − p, P(k, k) = p.
(c) This chain is irreducible since all states communicate with each other. It is also aperiodic since it includes a self-transition, P(0, 0) = 1 − p > 0. Let us write the equations for a stationary distribution. For state 0, we can write π0 = (1 − p)π0 + (1 − p)π1, which gives π1 = (p/(1 − p))π0. For state 1, we can write π1 = pπ0 + (1 − p)π2 = (1 − p)π1 + (1 − p)π2, which gives π2 = (p/(1 − p))π1. Similarly, for any j ∈ {1, 2, . . . , k}, we obtain πj = απj−1, where α = p/(1 − p). Note that since 0 < p < 1/2, we conclude that 0 < α < 1. We obtain πj = α^j π0 for j = 1, 2, . . . , k. Finally, we must have

1 = Σ_{j=0}^{k} πj = Σ_{j=0}^{k} α^j π0 = ((1 − α^{k+1})/(1 − α)) π0   (geometric series),

so π0 = (1 − α)/(1 − α^{k+1}). Therefore, the stationary distribution is given by

πj = ((1 − α)/(1 − α^{k+1})) α^j,   j = 0, 1, 2, . . . , k.

Since this chain is irreducible and aperiodic and we have found a stationary distribution, we conclude that all states are positive recurrent and π = [π0, π1, . . . , πk] is the limiting distribution.
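The closed-form answer can be checked numerically. The sketch below uses arbitrary test values for k and p (they are not part of the problem statement), builds the matrix above, and compares the formula against the distribution the chain actually converges to:

```python
import numpy as np

k, p = 5, 0.3  # arbitrary test values; any k >= 1 and 0 < p < 1/2 work
P = np.zeros((k + 1, k + 1))
P[0, 0], P[0, 1] = 1 - p, p              # boundary: urn A empty
for i in range(1, k):
    P[i, i - 1], P[i, i + 1] = 1 - p, p  # interior birth-death transitions
P[k, k - 1], P[k, k] = 1 - p, p          # boundary: urn B empty

# Stationary distribution from the closed-form expression.
alpha = p / (1 - p)
pi = (1 - alpha) / (1 - alpha ** (k + 1)) * alpha ** np.arange(k + 1)

# The rows of P^n converge to pi since the chain is ergodic.
print(np.allclose(np.linalg.matrix_power(P, 1000)[0], pi))  # True
```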
Question 3. A machine, at any given day, can be in one of four different conditions: perfect,
good, average and critical.
• If the machine is in perfect condition, it stays in perfect condition the next day with
probability 0.7 and deteriorates into good condition with probability 0.3.
• If the machine is in good condition, it stays in good condition the next day with
probability 0.7 and deteriorates into average condition with probability 0.2. It breaks
down with probability 0.1.
• If the machine is in average condition, it stays in average condition the next day with
probability 0.7 and deteriorates into critical condition with probability 0.1. It breaks
down with probability 0.2.
• If the machine is in critical condition, it stays in critical condition the next day with
probability 0.6 and breaks down with probability 0.4.
If the machine is not in perfect condition, then some defective items are produced. The associated costs per day for each condition are given as follows:

Condition   Cost
Perfect     $10
Good        $15
Average     $20
Critical    $50

When the machine breaks down, it is immediately replaced. Replacement of the machine costs $200.
(a) Model the condition of the machine as a Markov chain.
(b) Show that the limiting probabilities exist. Find the limiting probabilities.
(c) Calculate the long-run expected average daily cost of this policy.
(d) Now consider a replacement policy where we replace the machine once it reaches critical
condition. Assume that in this case, if the machine breaks down, we pay a penalty
of $50 in addition to the replacement cost. Calculate the expected cost of this policy.
Compare it with your result in (c).
Solution of Question 3. (a) The set of states is E = {0, 1, 2, 3}, where states 0, 1, 2, 3 correspond to perfect, good, average, and critical condition, respectively. With the information given, the following transition probability matrix is obtained:
         0    1    2    3
    0   0.7  0.3   0    0
P = 1   0.1  0.7  0.2   0
    2   0.2   0   0.7  0.1
    3   0.4   0    0   0.6
(b) As observed from the transition probability matrix, the Markov chain consists of a single communicating class of states; thus it is irreducible.
Since the chain is irreducible and the state space is finite, all states are positive recurrent.
For each state i we have P(i, i) > 0, so a return to state i in one step is possible. This is sufficient to conclude that the Markov chain is aperiodic.
A Markov chain which is irreducible, positive recurrent and aperiodic is known to have a unique limiting probability distribution. The limiting probability distribution π = [π0, π1, π2, π3] is found by solving the following system of equations:
πP = π,   Σ_{i∈E} πi = 1.

Solving this system yields π = [6/17, 6/17, 4/17, 1/17] ≈ [0.353, 0.353, 0.235, 0.059].
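These limiting probabilities can be verified numerically with the same least-squares approach used for Question 1 (a sketch):

```python
import numpy as np

# Machine-condition chain of Question 3 (states 0..3 = perfect..critical).
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.1, 0.7, 0.2, 0.0],
              [0.2, 0.0, 0.7, 0.1],
              [0.4, 0.0, 0.0, 0.6]])

A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi * 17)  # approx [6, 6, 4, 1], i.e. pi = [6/17, 6/17, 4/17, 1/17]
```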
(c) The long-run expected average daily cost is the average condition cost plus the replacement cost weighted by the daily breakdown probability:

E[M] = 10π0 + 15π1 + 20π2 + 50π3 + 200(0.1π1 + 0.2π2 + 0.4π3) = 280/17 + 360/17 = 640/17 ≈ $37.65.
(d) The set of states is Ẽ = {0, 0f, 1, 2}, where state 0f marks a machine in perfect condition whose last replacement was due to a breakdown (this state tracks the $50 penalty). The transition probability matrix is:

          0   0f    1    2
    0    0.7   0   0.3   0
P̃ = 0f    0   0.7  0.3   0
    1     0   0.1  0.7  0.2
    2    0.1  0.2   0   0.7
The stationary distribution of this chain is π̃ = [1/12, 7/24, 3/8, 1/4]. The expected daily cost of this policy is therefore

E[M̃] = 10π̃0 + 10π̃0f + 15π̃1 + 20π̃2 + (200 + 50)(0.1π̃1 + 0.2π̃2) + 200(0.1π̃2) = $41.25.

Since E[M̃] > E[M], the previous policy incurs a lower average daily cost.
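The two policies can be compared numerically. A sketch (the cost bookkeeping mirrors the accounting above: condition costs per day plus replacement charges weighted by the daily breakdown and replacement probabilities):

```python
import numpy as np

def stationary(P):
    """Solve pi P = pi, sum(pi) = 1 by least squares."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    return np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]

# Policy of part (c): replace only on breakdown.
P1 = np.array([[0.7, 0.3, 0.0, 0.0],
               [0.1, 0.7, 0.2, 0.0],
               [0.2, 0.0, 0.7, 0.1],
               [0.4, 0.0, 0.0, 0.6]])
pi1 = stationary(P1)
EM = pi1 @ np.array([10, 15, 20, 50]) + 200 * (pi1 @ np.array([0.0, 0.1, 0.2, 0.4]))
print(EM)  # approx 37.65 (= 640/17)

# Policy of part (d): also replace on reaching critical (states 0, 0f, 1, 2).
P2 = np.array([[0.7, 0.0, 0.3, 0.0],
               [0.0, 0.7, 0.3, 0.0],
               [0.0, 0.1, 0.7, 0.2],
               [0.1, 0.2, 0.0, 0.7]])
pi2 = stationary(P2)
EMt = (pi2 @ np.array([10, 10, 15, 20])        # condition costs
       + 250 * (0.1 * pi2[2] + 0.2 * pi2[3])   # breakdown: replacement + penalty
       + 200 * 0.1 * pi2[3])                   # replacement at critical
print(EMt)  # 41.25
```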
Question 4. Consider the Markov chains whose transition probability matrices are given
below. For each Markov chain, classify its states and determine if the Markov chain is
ergodic. What can you say about the limit behavior of each Markov chain?
(a)
0 1 2
0 0 0.5 0.5
Pa = 1 0.5 0 0.5
2 0.5 0.5 0
Solution:
• C = {0, 1, 2} (recurrent): the chain is irreducible, and since the state space is finite, all states are positive recurrent. It is also aperiodic: returns to state 0 can take 2 steps (0 → 1 → 0) or 3 steps (0 → 1 → 2 → 0), so the period is gcd(2, 3) = 1.
• Therefore Pa is ergodic.
• To find the limiting distribution, one needs to solve Π = Π · Pa and Π · 1 = 1.
(Π1, Π2, Π3) = (Π1, Π2, Π3) · Pa gives the balance equations

Π1 = (1/2)Π2 + (1/2)Π3
Π2 = (1/2)Π1 + (1/2)Π3
Π3 = (1/2)Π1 + (1/2)Π2
1 = Π1 + Π2 + Π3

whose solution is Π1 = Π2 = Π3 = 1/3.
(b)
          0    1    2    3
    0     0    0    0    1
Pb = 1    0    0    0    1
    2    0.5  0.5   0    0
    3     0    0    1    0
Solution:
• C = {0, 1, 2, 3} (recurrent)
– Pb is irreducible since all the states communicate; it has a single communicating class.
– All of the states are recurrent, and since Pb is a finite-state Markov chain, all recurrent states are positive recurrent.
– Pb is periodic with period 3: starting from state 0, every return to 0 passes through state 3 and then state 2, so every return time is a multiple of 3.
• Therefore Pb is not ergodic.
• A limiting distribution does not exist; because of the periodicity, the n-step transition probabilities oscillate rather than converge (although a unique stationary distribution still exists).
(c)
          0     1     2     3    4
    0    0.5    0    0.5    0    0
    1    0.25  0.5   0.25   0    0
Pc = 2   0.5    0    0.5    0    0
    3     0     0     0    0.5  0.5
    4     0     0     0    0.5  0.5
Solution:
• C1 = {0, 2} (recurrent)
• C2 = {1} (transient)
• C3 = {3, 4} (recurrent)
– Pc is reducible since not all states communicate; it has 3 communicating classes.
– Since Pc is a finite state Markov Chain, all recurrent states are positive recurrent.
– Pc is aperiodic.
Starting from the transient state 1, the chain can only move among {0, 1, 2}, so it is absorbed into C1 = {0, 2} with probability 1 and its limiting row equals the stationary distribution of C1. The limiting matrix is

         0    1    2    3    4
    0   1/2   0   1/2   0    0
    1   1/2   0   1/2   0    0
π = 2   1/2   0   1/2   0    0
    3    0    0    0   1/2  1/2
    4    0    0    0   1/2  1/2
(d)
          0     1     2    3   4
    0    0.25  0.75   0    0   0
    1    0.5   0.5    0    0   0
Pd = 2    0     0     1    0   0
    3     0     0    1/3  2/3  0
    4     1     0     0    0   0
Solution:
• C1 = {0, 1} (recurrent)
• C2 = {2} (recurrent)
• C3 = {3} (transient)
• C4 = {4} (transient)
Let T = min{n ≥ 0|Xn ∈ C1 or Xn ∈ C2 }.
Note that P{XT ∈ C1 |X0 = 3} + P{XT ∈ C2 |X0 = 3} = 1.
P{XT ∈ C1 |X0 = 4} + P{XT ∈ C2 |X0 = 4} = 1.
Let νi = P{XT ∈ C2 | X0 = i}. Then νC1 = 0, νC2 = 1, ν4 = 0, and ν3 = (2/3)ν3 + (1/3)νC2, which gives ν3 = 1.
Starting from state 3, the Markov chain will end up in C1 and C2 with probabilities
0 and 1 respectively.
Starting from state 4, the Markov chain will end up in C1 and C2 with probabilities
1 and 0 respectively.
Hence π30 = π31 = 0 and π32 = 1. Similarly, π40 = π00 = 2/5 and π41 = π11 = 3/5.
Hence we have

         0    1    2   3   4
    0   2/5  3/5   0   0   0
    1   2/5  3/5   0   0   0
π = 2    0    0    1   0   0
    3    0    0    1   0   0
    4   2/5  3/5   0   0   0
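The limiting matrix can be checked by raising Pd to a high power (a numerical sketch; 500 is just a comfortably large exponent):

```python
import numpy as np

Pd = np.array([[0.25, 0.75, 0,     0,     0],
               [0.50, 0.50, 0,     0,     0],
               [0,    0,    1,     0,     0],
               [0,    0,    1 / 3, 2 / 3, 0],
               [1,    0,    0,     0,     0]])

# The rows of Pd^n converge to the limiting matrix derived above.
print(np.round(np.linalg.matrix_power(Pd, 500), 4))
# rows 0, 1, 4 -> [0.4, 0.6, 0, 0, 0]; rows 2, 3 -> [0, 0, 1, 0, 0]
```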
Question 5. Let (Nt )t be a Poisson process with rate λ > 0 and let Sn be the n’th arrival
time.
Solution:
(b) We have
(c) We have
P(N1 = 4, N3 = 4, N6 = 5) = P(N1 = 4, N3 − N1 = 0, N6 − N3 = 1)
= P(N1 = 4) P(N3 − N1 = 0) P(N6 − N3 = 1)   (by independent increments)
= P(N1 = 4) P(N2 = 0) P(N3 = 1)   (by stationary increments)
= e^{−λ} (λ^4/4!) · e^{−2λ} ((2λ)^0/0!) · e^{−3λ} ((3λ)^1/1!)
(d) We have
P(N1 = 2, N3 ≤ 4) = P(N1 = 2, N3 − N1 ≤ 2)
= P(N1 = 2) P(N3 − N1 ≤ 2)   (by independent increments)
= P(N1 = 2) P(N2 ≤ 2)   (by stationary increments)
= P(N1 = 2) [P(N2 = 0) + P(N2 = 1) + P(N2 = 2)]
= e^{−λ} (λ^2/2!) · e^{−2λ} [(2λ)^0/0! + (2λ)^1/1! + (2λ)^2/2!]
(e) We have
P(2N5 + N2 = 5) = P(N2 = 1, N5 = 2)   (since N2 ≤ N5, the only nonnegative integer solution of 2N5 + N2 = 5 is N5 = 2, N2 = 1)
= P(N2 = 1, N5 − N2 = 1)
= P(N2 = 1) P(N5 − N2 = 1)   (by independent increments)
= P(N2 = 1) P(N3 = 1)   (by stationary increments)
= e^{−2λ} ((2λ)^1/1!) · e^{−3λ} ((3λ)^1/1!)
(f) We have
P(N1 = 4, N6 = 5 | N10 = 7) = P(N1 = 4, N6 = 5, N10 = 7) / P(N10 = 7)
= P(N1 = 4, N6 − N1 = 1, N10 − N6 = 2) / P(N10 = 7)
= P(N1 = 4) P(N6 − N1 = 1) P(N10 − N6 = 2) / P(N10 = 7)   (by independent increments)
= P(N1 = 4) P(N5 = 1) P(N4 = 2) / P(N10 = 7)   (by stationary increments)
= [e^{−λ} λ^4/4!] [e^{−5λ} (5λ)^1/1!] [e^{−4λ} (4λ)^2/2!] / [e^{−10λ} (10λ)^7/7!]
= (7!/(4! · 1! · 2!)) (λ/10λ)^4 (5λ/10λ)^1 (4λ/10λ)^2
Another way to solve this problem is by using the multinomial distribution: there are a total of 7 arrivals (outcomes) by time 10, and each arrival is classified into one of three types, namely arrivals in (0, 1], (1, 6], and (6, 10], with probabilities λ/10λ = 1/10, 5λ/10λ = 1/2, and 4λ/10λ = 2/5, respectively. Hence we have

P(N_(0,1] = 4, N_(1,6] = 1, N_(6,10] = 2) = (7!/(4! · 1! · 2!)) (1/10)^4 (1/2)^1 (2/5)^2.
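The agreement between the direct computation and the multinomial form is easy to confirm numerically. A sketch with an arbitrary test value of λ (the rate cancels, so the identity holds for every λ > 0):

```python
from math import exp, factorial

def pois(k, mu):
    """Poisson pmf with mean mu."""
    return exp(-mu) * mu ** k / factorial(k)

lam = 1.3  # arbitrary test rate

# Direct form from the increment decomposition.
direct = pois(4, lam) * pois(1, 5 * lam) * pois(2, 4 * lam) / pois(7, 10 * lam)

# Multinomial form: 7 arrivals land in (0,1], (1,6], (6,10] with
# probabilities 1/10, 5/10, 4/10.
multi = (factorial(7) / (factorial(4) * factorial(1) * factorial(2))
         * (1 / 10) ** 4 * (5 / 10) ** 1 * (4 / 10) ** 2)

print(direct, multi)  # both equal 0.00084
```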
(g) We have
P(N1 = 4, N7 = 8 | N3 = 6) = P(N1 = 4, N7 = 8, N3 = 6) / P(N3 = 6)
= P(N1 = 4, N3 − N1 = 2, N7 − N3 = 2) / P(N3 = 6)
= P(N1 = 4) P(N3 − N1 = 2) P(N7 − N3 = 2) / P(N3 = 6)   (by independent increments)
= P(N1 = 4) P(N2 = 2) P(N4 = 2) / P(N3 = 6)   (by stationary increments)
= [e^{−λ} λ^4/4!] [e^{−2λ} (2λ)^2/2!] [e^{−4λ} (4λ)^2/2!] / [e^{−3λ} (3λ)^6/6!]
= (6!/(4! · 2!)) (λ/3λ)^4 (2λ/3λ)^2 e^{−4λ} (4λ)^2/2!
(i) We have
= 18λ2 + 16λ
(j) We have
(k) We have
Question 6. Let (Nt )t be a Poisson process with rate λ > 0 modelling the number of
arrivals of customers to a gift shop during the time interval [0, t].
(a) What is the expected time until the fifth customer arrives?
(b) What is the probability that the time that passes between the ninth and tenth arrivals exceeds 2.8?
(c) What is the probability that there is no arrival during the time interval (13.2, 17.8]?
(d) What is the probability that there are exactly two arrivals during the time interval
(13.2, 17.8]?
(e) Consider the time period I = (3, 4.5] ∪ (7.5, 10]. Let MI be the number of arrivals
during the time period I. Find the distribution of MI .
(f) Compute Cov(Nt , Ns ) where s < t.
Solution:
(a) The arrival time S5 of the fifth customer is Gamma(5, λ), since it is the sum of five independent, identically distributed Expon(λ) inter-arrival times. Therefore E[S5] = 5/λ.
(b) The inter-arrival time T10 = S10 − S9 is Expon(λ). Therefore P(T10 > 2.8) = e^{−2.8λ}.
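A simulation sketch of this answer (λ below is an arbitrary test value; note that numpy's exponential sampler is parametrized by the mean 1/λ):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam, n = 0.5, 200_000  # arbitrary test rate and sample size

# The gap T10 = S10 - S9 between the 9th and 10th arrivals is a single
# Expon(lam) inter-arrival time, so P(T10 > 2.8) = exp(-2.8 * lam).
gaps = rng.exponential(scale=1 / lam, size=n)
print((gaps > 2.8).mean(), np.exp(-2.8 * lam))  # both approx 0.2466
```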
(c) There is no arrival during the time interval (13.2, 17.8] if and only if the number of arrivals during (0, 17.8] equals the number of arrivals during (0, 13.2]. Hence we need to compute P(N17.8 = N13.2). We have

P(N17.8 = N13.2) = P(N17.8 − N13.2 = 0) = e^{−4.6λ},

since by stationary increments N17.8 − N13.2 ∼ Pois(4.6λ).
(d) There are exactly two arrivals during the time interval (13.2, 17.8] if and only if the number of arrivals during (0, 17.8] is larger by two than the number of arrivals during (0, 13.2]. Hence we need to compute P(N17.8 = N13.2 + 2). We have

P(N17.8 = N13.2 + 2) = P(N17.8 − N13.2 = 2) = e^{−4.6λ} (4.6λ)^2/2!.
(e) We have that N4.5 − N3 equals the number of arrivals during the time interval (3, 4.5], and N10 − N7.5 counts the number of arrivals during the time interval (7.5, 10]. Therefore MI = (N4.5 − N3) + (N10 − N7.5). Let X = N4.5 − N3 and Y = N10 − N7.5, so that MI = X + Y. By the independent increments property, X and Y are independent, and by stationary increments, X ∼ Pois(1.5λ) and Y ∼ Pois(2.5λ). We compute the probability mass function of X + Y:
P(X + Y = k) = Σ_{j=0}^{k} P(X = j, Y = k − j)
= Σ_{j=0}^{k} P(X = j) P(Y = k − j)   (by independence of X and Y)
= Σ_{j=0}^{k} e^{−1.5λ} ((1.5λ)^j/j!) · e^{−2.5λ} ((2.5λ)^{k−j}/(k − j)!)   (since the marginal distributions are Poisson)
= e^{−4λ} (1/k!) Σ_{j=0}^{k} (k!/(j!(k − j)!)) (1.5λ)^j (2.5λ)^{k−j}
= e^{−4λ} (1/k!) (1.5λ + 2.5λ)^k   (by the binomial theorem)
= e^{−4λ} (4λ)^k / k!

Therefore we have MI ∼ Pois(4λ) and P(MI = k) = e^{−4λ} (4λ)^k/k! for k = 0, 1, . . .
Another way: by stationary increments, N10 − N7.5 has the same distribution as N7 − N4.5. Hence MI = (N4.5 − N3) + (N10 − N7.5) has the same distribution as (N4.5 − N3) + (N7 − N4.5) = N7 − N3. (Here we use the fact that if A, B, and C are random variables such that A and B have the same distribution, A and C are independent, and B and C are independent, then A + C and B + C have the same distribution.) And N7 − N3 has the same distribution as N4 ∼ Pois(4λ). Therefore MI ∼ Pois(4λ) and P(MI = k) = e^{−4λ} (4λ)^k/k! for k = 0, 1, . . .
Note that one could also use the superposition theorem to conclude directly that MI ∼ Pois(4λ).
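A small numerical check that the convolution of Pois(1.5λ) and Pois(2.5λ) is indeed Pois(4λ) (λ below is an arbitrary test value):

```python
from math import exp, factorial

def pois(k, mu):
    """Poisson pmf with mean mu."""
    return exp(-mu) * mu ** k / factorial(k)

lam = 0.8  # arbitrary test rate

for k in range(6):
    # Convolution sum from the derivation above.
    conv = sum(pois(j, 1.5 * lam) * pois(k - j, 2.5 * lam) for j in range(k + 1))
    print(k, abs(conv - pois(k, 4 * lam)) < 1e-12)  # True for each k
```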
(f) We have
Cov(Nt, Ns) = E[(Nt − E[Nt])(Ns − E[Ns])]
= E[(Nt − λt)(Ns − λs)]
= E[Nt Ns − λs Nt − λt Ns + λt λs]
= E[Nt Ns] − λs E[Nt] − λt E[Ns] + λt λs
= E[Nt Ns] − λt λs − λt λs + λt λs
= E[Nt Ns] − λt λs
= E[(Nt − Ns + Ns) Ns] − λt λs
= E[(Nt − Ns) Ns] + E[Ns^2] − λt λs
= E[Nt − Ns] E[Ns] + Var(Ns) + (E[Ns])^2 − λt λs   (by independent increments)
= E[Nt−s] E[Ns] + λs + (λs)^2 − λt λs   (by stationary increments)
= λ(t − s) λs + λs + (λs)^2 − λt λs
= λt λs − (λs)^2 + λs + (λs)^2 − λt λs
= λs.
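A simulation sketch of the covariance identity (the rate and the times s < t are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
lam, s, t, n = 2.0, 1.0, 3.0, 500_000  # arbitrary test parameters

# Build (Ns, Nt) using independent increments: Nt = Ns + (Nt - Ns).
Ns = rng.poisson(lam * s, size=n)
Nt = Ns + rng.poisson(lam * (t - s), size=n)

print(np.cov(Ns, Nt)[0, 1])  # approx lam * s = 2.0
```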