
IE 325: Stochastic Models

Homework Assignment 4
Solutions
Spring 2020

Question 1. A professor continually gives exams to her students. She can give three possible types of exams, and her class is graded as either having done well or badly. Let pi denote
the probability that the class does well on type i exam, and suppose that p1 = 0.3, p2 = 0.6,
and p3 = 0.9. If the class does well on an exam, then the next exam is equally likely to be
any of the three types. If the class does badly, then the next exam is always type 1. What
proportion of the exams are of type i, i = 1, 2, 3?

Solution of Question 1. Let Xn denote the type of exam in period n. Then (Xn)n≥0 is a Markov chain on state space E = {1, 2, 3}. If the class does well on a type i exam (probability pi), the next exam is each of the three types with probability 1/3; if it does badly (probability 1 − pi), the next exam is type 1. Hence P(i, 1) = (1 − pi) + pi/3 and P(i, 2) = P(i, 3) = pi/3, giving the one-step transition probability matrix:

         1    2    3
    1  0.8  0.1  0.1
P = 2  0.6  0.2  0.2
    3  0.4  0.3  0.3

Note that the Markov chain is ergodic. The limiting probability distribution π = (π1 , π2 , π3 )
is the solution of:

π1 = 0.8π1 + 0.6π2 + 0.4π3
π2 = 0.1π1 + 0.2π2 + 0.3π3
π3 = 0.1π1 + 0.2π2 + 0.3π3
π1 + π2 + π3 = 1

Solving this we obtain π = (5/7, 1/7, 1/7).
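As a quick numerical sanity check (not part of the original solution; a Python sketch assuming numpy is available), one can raise P to a high power, since for an ergodic chain every row of P^n converges to the limiting distribution:

    import numpy as np

    # One-step transition matrix of the exam-type chain.
    P = np.array([[0.8, 0.1, 0.1],
                  [0.6, 0.2, 0.2],
                  [0.4, 0.3, 0.3]])

    # For an ergodic chain, every row of P^n converges to the limiting distribution.
    pi = np.linalg.matrix_power(P, 100)[0]
    print(pi)  # ≈ [0.7143, 0.1429, 0.1429] = (5/7, 1/7, 1/7)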

Question 2. Two urns A and B contain a total number of k balls. In each step, urn B is
chosen with probability p (0 < p < 1/2), and urn A is chosen with probability 1 − p. Then a
ball is selected from the chosen urn and placed in the other urn. If urn A becomes empty
we transfer 0 balls from urn A to urn B with probability 1 − p, and 1 ball from urn B to A
with probability p.
(a) Model this process as a Markov chain.

(b) Determine the one-step transition probability matrix.

(c) Does the Markov chain have a limiting distribution? If so, find the limiting distribution.

Solution of Question 2. (a) Let Xn be the number of balls in urn A at time n. The number of balls in urn A after a step depends only on its current value, so (Xn)n≥0 is a Markov chain on state space E = {0, 1, . . . , k}. Its state transition diagram is a birth–death chain: from each interior state the chain moves up with probability p and down with probability 1 − p, with self-loops at the boundary states 0 and k.

(b) The one-step transition probability matrix is:

          0     1     2     3    · · ·  k−1    k
    0   1−p    p     0     0    · · ·   0     0
    1   1−p    0     p     0    · · ·   0     0
P = 2    0    1−p    0     p    · · ·   0     0
    ⋮     ⋮     ⋮     ⋮     ⋮     ⋱      ⋮     ⋮
    k    0     0     0     0    · · ·  1−p    p

That is, P(0, 0) = 1 − p and P(0, 1) = p; P(i, i−1) = 1 − p and P(i, i+1) = p for 1 ≤ i ≤ k − 1; and P(k, k−1) = 1 − p, P(k, k) = p.
(c) This chain is irreducible since all states communicate with each other. It is also aperiodic since it includes a self-transition, P00 = 1 − p > 0. Let's write the equations for a stationary distribution. For state 0, we can write π0 = (1 − p)π0 + (1 − p)π1, which results in π1 = (p/(1 − p))π0. For state 1, we can write π1 = pπ0 + (1 − p)π2 = (1 − p)π1 + (1 − p)π2, which results in π2 = (p/(1 − p))π1. Similarly, for any j ∈ {1, 2, . . . , k}, we obtain πj = απj−1, where α = p/(1 − p). Note that since 0 < p < 1/2, we have 0 < α < 1. Iterating gives πj = α^j π0 for j = 1, 2, . . . , k. Finally, we must have

1 = Σ_{j=0}^{k} πj = Σ_{j=0}^{k} α^j π0 = ((1 − α^{k+1})/(1 − α)) π0   (geometric series).

Thus, π0 = (1 − α)/(1 − α^{k+1}). Therefore, the stationary distribution is given by

πj = ((1 − α)/(1 − α^{k+1})) α^j,   for j = 0, 1, 2, . . . , k.

Since this chain is irreducible and aperiodic and we have found a stationary distribution, we conclude that all states are positive recurrent and π = [π0, π1, . . . , πk] is the limiting distribution.
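As a numerical check (again a sketch, with illustrative values k = 5 and p = 0.3 assumed for the example; the problem leaves k and p general):

    import numpy as np

    k, p = 5, 0.3  # assumed example values
    P = np.zeros((k + 1, k + 1))
    P[0, 0], P[0, 1] = 1 - p, p              # boundary behaviour at state 0
    for i in range(1, k):
        P[i, i - 1], P[i, i + 1] = 1 - p, p  # interior states
    P[k, k - 1], P[k, k] = 1 - p, p          # boundary behaviour at state k

    alpha = p / (1 - p)
    pi_formula = (1 - alpha) / (1 - alpha ** (k + 1)) * alpha ** np.arange(k + 1)
    pi_limit = np.linalg.matrix_power(P, 500)[0]  # rows of P^n converge to pi
    print(np.allclose(pi_formula, pi_limit))      # True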

Question 3. A machine, on any given day, can be in one of four different conditions: perfect, good, average and critical.
• If the machine is in perfect condition, it stays in perfect condition the next day with
probability 0.7 and deteriorates into good condition with probability 0.3.

• If the machine is in good condition, it stays in good condition the next day with
probability 0.7 and deteriorates into average condition with probability 0.2. It breaks
down with probability 0.1.

• If the machine is in average condition, it stays in average condition the next day with
probability 0.7 and deteriorates into critical condition with probability 0.1. It breaks
down with probability 0.2.

• If the machine is in critical condition, it stays in critical condition the next day with
probability 0.6 and breaks down with probability 0.4.

If the machine is not in perfect condition, then some defective items are produced. The associated cost per day for each condition is given as follows:

Condition   Cost
Perfect     $10
Good        $15
Average     $20
Critical    $50

When the machine breaks down, it is immediately replaced. Replacement of the machine costs $200.

(a) Model this process as a Markov chain.

(b) Show that the limiting probabilities exist. Find the limiting probabilities.

(c) Find the expected cost per day.

(d) Now consider a replacement policy where we replace the machine once it reaches critical
condition. Assume that in this case, if the machine breaks down, we pay a penalty
of $50 in addition to the replacement cost. Calculate the expected cost of this policy.
Compare it with your result in (c).

Solution of Question 3. (a) Define the Markov chain (Xn)n≥0 by:

Xn = 0, if the machine is in perfect condition;
Xn = 1, if the machine is in good condition;
Xn = 2, if the machine is in average condition;
Xn = 3, if the machine is in critical condition.

The set of states is E = {0, 1, 2, 3}. With the given information, the following transition probability matrix is obtained:

         0    1    2    3
    0  0.7  0.3   0    0
P = 1  0.1  0.7  0.2   0
    2  0.2   0   0.7  0.1
    3  0.4   0    0   0.6

(b) The Markov chain consists of a single communicating class, since every state can be reached from every other state; thus it is irreducible. Because the chain is irreducible and has finitely many states, all states are positive recurrent. Moreover, every state has a self-transition (P(i, i) > 0), which is sufficient to conclude that the Markov chain is aperiodic. A Markov chain which is irreducible, positive recurrent and aperiodic is known to have a unique limiting probability distribution. The limiting probability distribution π = [π0, π1, π2, π3] is found by solving the following system of equations:

πP = π,   Σ_{i∈E} πi = 1

which can be explicitly written as follows:

0.7π0 + 0.1π1 + 0.2π2 + 0.4π3 = π0
0.3π0 + 0.7π1 + 0π2 + 0π3 = π1
0π0 + 0.2π1 + 0.7π2 + 0π3 = π2
0π0 + 0π1 + 0.1π2 + 0.6π3 = π3
π0 + π1 + π2 + π3 = 1

Solving this system of equations gives π = [0.3529, 0.3529, 0.2353, 0.0588].


(c) In the long run, the system is in state i with probability πi. Each day the machine incurs the maintenance cost of its current condition, and whenever it breaks down (with probability 0.1 from state 1, 0.2 from state 2 and 0.4 from state 3) a replacement cost of $200 is incurred in addition. Let M be the r.v. indicating the daily cost. Then:

E[M] = 10π0 + 15π1 + 20π2 + 50π3   (expected maintenance cost per day)
     + 200(0.1π1 + 0.2π2 + 0.4π3)   (expected replacement cost per day)
     = $37.65
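Both the limiting distribution and the expected daily cost can be reproduced numerically (a sketch assuming numpy; not part of the original solution):

    import numpy as np

    # Transition matrix of the machine-condition chain from part (a).
    P = np.array([[0.7, 0.3, 0.0, 0.0],
                  [0.1, 0.7, 0.2, 0.0],
                  [0.2, 0.0, 0.7, 0.1],
                  [0.4, 0.0, 0.0, 0.6]])
    pi = np.linalg.matrix_power(P, 500)[0]            # limiting distribution

    maintenance = np.array([10.0, 15.0, 20.0, 50.0])  # daily cost per condition
    p_break = np.array([0.0, 0.1, 0.2, 0.4])          # breakdown probability per condition
    print(pi)                                         # ≈ [0.3529, 0.3529, 0.2353, 0.0588]
    print(maintenance @ pi + 200 * (p_break @ pi))    # ≈ 37.65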

(d) The Markov chain (X̃n)n≥0 corresponding to this system is defined by:

X̃n = 0, if the machine is in perfect condition, replaced without failure;
X̃n = 0f, if the machine is in perfect condition, replaced after failure;
X̃n = 1, if the machine is in good condition;
X̃n = 2, if the machine is in average condition.

The set of states is Ẽ = {0, 0f, 1, 2}. The transition probability matrix is:

          0    0f    1     2
     0  0.7    0   0.3    0
P = 0f   0   0.7   0.3    0
     1   0   0.1   0.7   0.2
     2  0.1  0.2    0    0.7

This Markov chain is also irreducible, aperiodic and positive recurrent; therefore it has a limiting probability distribution. Solving πP = π and Σ_{i∈Ẽ} πi = 1 together, we obtain π = [0.0833, 0.2917, 0.3750, 0.2500].
Let M̃ be the r.v. indicating the daily cost incurred in this system. Breaking down from state 1 (probability 0.1) or from state 2 (probability 0.2) now costs $200 + $50 = $250, while the planned replacement upon reaching critical condition (probability 0.1 from state 2) costs $200. Then:

E[M̃] = 10π0 + 10π0f + 15π1 + 20π2   (expected maintenance cost per day)
      + 250(0.1π1 + 0.2π2)   (expected replacement and penalty cost per day)
      + 200(0.1π2)   (expected replacement cost per day, without failure)
      = $41.25

Since E[M̃ ] > E[M ], the previous policy incurs a lower average daily cost.
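The same numerical check for the replace-at-critical policy (a sketch assuming numpy; states ordered 0, 0f, 1, 2 as in the matrix above):

    import numpy as np

    P = np.array([[0.7, 0.0, 0.3, 0.0],   # 0:  perfect, replaced without failure
                  [0.0, 0.7, 0.3, 0.0],   # 0f: perfect, replaced after failure
                  [0.0, 0.1, 0.7, 0.2],   # 1:  good
                  [0.1, 0.2, 0.0, 0.7]])  # 2:  average
    pi = np.linalg.matrix_power(P, 500)[0]
    maintenance = np.array([10.0, 10.0, 15.0, 20.0])
    cost = (maintenance @ pi
            + 250 * (0.1 * pi[2] + 0.2 * pi[3])  # breakdown: replacement + penalty
            + 200 * 0.1 * pi[3])                 # planned replacement at critical
    print(pi, cost)  # ≈ [0.0833, 0.2917, 0.3750, 0.2500], 41.25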

Question 4. Consider the Markov chains whose transition probability matrices are given
below. For each Markov chain, classify its states and determine if the Markov chain is
ergodic. What can you say about the limit behavior of each Markov chain?

(a)
           0    1    2
     0     0   0.5  0.5
Pa = 1   0.5    0   0.5
     2   0.5  0.5    0
Solution:

• C = {0, 1, 2} (recurrent)
  – Pa is irreducible since all the states communicate; it has a single communicating class.
  – All of the states are recurrent, and since Pa is a finite-state Markov chain, all recurrent states are positive recurrent.
  – Pa is aperiodic (it has cycles of lengths 2 and 3, whose greatest common divisor is 1).
• Therefore Pa is ergodic.
• To find the limiting distribution, one needs to solve Π = Π · Pa together with Π1 + Π2 + Π3 = 1, i.e.:

Π1 = 0.5Π2 + 0.5Π3
Π2 = 0.5Π1 + 0.5Π3
Π3 = 0.5Π1 + 0.5Π2
1 = Π1 + Π2 + Π3

Solving, Π1 = Π2 = Π3 = 1/3.

(b)
           0    1    2    3
     0     0    0    0    1
Pb = 1     0    0    0    1
     2   0.5  0.5    0    0
     3     0    0    1    0

Solution:

• C = {0, 1, 2, 3} (recurrent)
  – Pb is irreducible since all the states communicate; it has a single communicating class.
  – All of the states are recurrent, and since Pb is a finite-state Markov chain, all recurrent states are positive recurrent.
  – Pb is periodic with period 3: every return to a state takes a multiple of 3 steps (e.g. 0 → 3 → 2 → 0).
• Therefore Pb is not ergodic.
• A limiting distribution does not exist (a stationary distribution still exists, but P^n does not converge).
(c)
           0     1     2    3    4
     0   0.5    0    0.5   0    0
     1   0.25  0.5   0.25  0    0
Pc = 2   0.5    0    0.5   0    0
     3    0     0     0   0.5  0.5
     4    0     0     0   0.5  0.5
Solution:

• C1 = {0, 2} (recurrent)
• C2 = {1} (transient)
• C3 = {3, 4} (recurrent)
  – Pc is reducible since not all states communicate; it has 3 communicating classes.
  – Since Pc is a finite-state Markov chain, all recurrent states are positive recurrent.
  – Pc is aperiodic.
• Therefore Pc is not ergodic.


• A limiting distribution does not exist. However, we can find the limiting behaviour depending on the initial state.
  – Let πij = lim_{n→∞} P{Xn = j|X0 = i}.
  – Since state 1 is transient, πi1 = 0 for all i = 0, . . . , 4.
  – Since C1 and C3 are recurrent (closed) classes, π03 = π04 = π23 = π24 = π30 = π32 = π40 = π42 = 0.
  – Treating C1 and C3 as two separate ergodic Markov chains, we compute the limiting probabilities within each class:
    π00 = π02 = π20 = π22 = 1/2,
    π33 = π34 = π43 = π44 = 1/2.
  – Finally, we find the probability of absorption into C1 or C3 from the transient state 1.
    Let T = min{n ≥ 0 : Xn ∈ C1 or Xn ∈ C3}.
    Note that P{XT ∈ C1 |X0 = 1} + P{XT ∈ C3 |X0 = 1} = 1.
    Let νi = P{XT ∈ C1 |X0 = i}. Then νC1 = 1, νC3 = 0 and ν1 = 0.5ν1 + 0.5νC1, so ν1 = 1.
    Starting from state 1, the Markov chain will end up in C1 and C3 with probabilities 1 and 0, respectively.
    Hence π10 = 1 · π00 = 1/2, π12 = 1 · π22 = 1/2, and π13 = π14 = 0.
Hence we have,

          0    1    2    3    4
    0   1/2   0   1/2   0    0
    1   1/2   0   1/2   0    0
π = 2   1/2   0   1/2   0    0
    3    0    0    0   1/2  1/2
    4    0    0    0   1/2  1/2
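Both the absorption probability and the limiting matrix can be verified numerically (a sketch assuming numpy):

    import numpy as np

    Pc = np.array([[0.5,  0.0, 0.5,  0.0, 0.0],
                   [0.25, 0.5, 0.25, 0.0, 0.0],
                   [0.5,  0.0, 0.5,  0.0, 0.0],
                   [0.0,  0.0, 0.0,  0.5, 0.5],
                   [0.0,  0.0, 0.0,  0.5, 0.5]])

    # First-step analysis for absorption into C1 = {0, 2} from transient state 1:
    # nu1 = P(1,0)*1 + P(1,2)*1 + P(1,1)*nu1.
    nu1 = (Pc[1, 0] + Pc[1, 2]) / (1 - Pc[1, 1])
    print(nu1)  # 1.0

    # Each recurrent class is aperiodic, so P^n converges to the matrix pi above.
    print(np.round(np.linalg.matrix_power(Pc, 100), 4))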

(d)
           0     1    2    3    4
     0   0.25  0.75   0    0    0
     1   0.5   0.5    0    0    0
Pd = 2    0     0     1    0    0
     3    0     0    1/3  2/3   0
     4    1     0     0    0    0
Solution:

• C1 = {0, 1} (recurrent)
• C2 = {2} (recurrent)
• C3 = {3} (transient)
• C4 = {4} (transient)
  – Pd is reducible since not all states communicate; it has 4 communicating classes.
  – Since Pd is a finite-state Markov chain, all recurrent states are positive recurrent.
  – Pd is aperiodic.
• Therefore Pd is not ergodic.

• A limiting distribution does not exist. However, we can find the limiting behaviour depending on the initial state.

  – Let πij = lim_{n→∞} P{Xn = j|X0 = i}.
  – Since states 3 and 4 are transient, πi3 = πi4 = 0 for all i = 0, . . . , 4.
  – Since C1 and C2 are recurrent (closed) classes, π02 = π12 = π20 = π21 = 0.
  – Since C2 is absorbing, π22 = 1.
  – Treating C1 as a single ergodic Markov chain, we compute its limiting probabilities:
    π00 = π10 = 2/5,
    π01 = π11 = 3/5.
  – Finally, we find the probability of absorption into C1 or C2 from the transient states 3 and 4.

    Let T = min{n ≥ 0 : Xn ∈ C1 or Xn ∈ C2}.
    Note that P{XT ∈ C1 |X0 = 3} + P{XT ∈ C2 |X0 = 3} = 1 and
    P{XT ∈ C1 |X0 = 4} + P{XT ∈ C2 |X0 = 4} = 1.
    Let νi = P{XT ∈ C2 |X0 = i}. Then νC1 = 0, νC2 = 1, ν4 = 0 and ν3 = (2/3)ν3 + (1/3)νC2, so ν3 = 1.
    Starting from state 3, the Markov chain will end up in C1 and C2 with probabilities 0 and 1, respectively.
    Starting from state 4, the Markov chain will end up in C1 and C2 with probabilities 1 and 0, respectively.
    Hence π30 = π31 = 0 and π32 = 1, while
    π40 = π00 = 2/5, π41 = π11 = 3/5.
Hence we have,

          0    1    2   3   4
    0   2/5  3/5   0   0   0
    1   2/5  3/5   0   0   0
π = 2    0    0    1   0   0
    3    0    0    1   0   0
    4   2/5  3/5   0   0   0

Question 5. Let (Nt)t be a Poisson process with rate λ > 0 and let Sn be the n-th arrival time.

(a) Compute E[S4 ].

(b) Compute E[N4 − N2 |N1 = 3].

(c) Compute P(N1 = 4, N3 = 4, N6 = 5).

(d) Compute P(N1 = 2, N3 ≤ 4).

(e) Compute P(2N5 + N2 = 5).

(f) Compute P(N1 = 4, N6 = 5|N10 = 7).

(g) Compute P(N1 = 4, N7 = 8|N3 = 6).

(h) Compute P(N2 = 4, N7 = 5|N6 = 3).

(i) Compute E[2N3^2 − 4N5 + 3N10].

(j) Compute Var(5N3 − 3N5).

(k) Compute Var(N2 − 2N3 + 3N5).

Solution:

(a) Since S4 ∼ Gamma(4, λ), we have E[S4] = 4/λ.

(b) We have

E[N4 − N2 | N1 = 3] = E[N4 − N2]   (by independent increments)
= E[N4] − E[N2] = 4λ − 2λ = 2λ.

(c) We have

P(N1 = 4, N3 = 4, N6 = 5) = P(N1 = 4, N3 − N1 = 0, N6 − N3 = 1)
= P(N1 = 4) P(N3 − N1 = 0) P(N6 − N3 = 1)   (by independent increments)
= P(N1 = 4) P(N2 = 0) P(N3 = 1)   (by stationary increments)
= e^{−λ} (λ^4/4!) · e^{−2λ} ((2λ)^0/0!) · e^{−3λ} ((3λ)^1/1!)

(d) We have

P(N1 = 2, N3 ≤ 4) = P(N1 = 2, N3 − N1 ≤ 2)
= P(N1 = 2) P(N3 − N1 ≤ 2)   (by independent increments)
= P(N1 = 2) P(N2 ≤ 2)   (by stationary increments)
= P(N1 = 2) [P(N2 = 0) + P(N2 = 1) + P(N2 = 2)]
= e^{−λ} (λ^2/2!) · e^{−2λ} [(2λ)^0/0! + (2λ)^1/1! + (2λ)^2/2!]

(e) Since N2 ≤ N5 and both are nonnegative integers, 2N5 + N2 = 5 is possible only when N5 = 2 and N2 = 1. We have

P(2N5 + N2 = 5) = P(N2 = 1, N5 = 2)
= P(N2 = 1, N5 − N2 = 1)
= P(N2 = 1) P(N5 − N2 = 1)   (by independent increments)
= P(N2 = 1) P(N3 = 1)   (by stationary increments)
= e^{−2λ} ((2λ)^1/1!) · e^{−3λ} ((3λ)^1/1!)

(f) We have

P(N1 = 4, N6 = 5 | N10 = 7) = P(N1 = 4, N6 = 5, N10 = 7) / P(N10 = 7)
= P(N1 = 4, N6 − N1 = 1, N10 − N6 = 2) / P(N10 = 7)
= P(N1 = 4) P(N6 − N1 = 1) P(N10 − N6 = 2) / P(N10 = 7)   (by independent increments)
= P(N1 = 4) P(N5 = 1) P(N4 = 2) / P(N10 = 7)   (by stationary increments)
= [e^{−λ} (λ^4/4!)] [e^{−5λ} ((5λ)^1/1!)] [e^{−4λ} ((4λ)^2/2!)] / [e^{−10λ} ((10λ)^7/7!)]
= (7! / (4! · 1! · 2!)) (λ/10λ)^4 (5λ/10λ)^1 (4λ/10λ)^2

Another way to solve this problem is by using the multinomial distribution: given a total of 7 arrivals by time 10, each arrival independently falls into one of the three intervals (0, 1], (1, 6] and (6, 10] with probabilities λ/10λ = 1/10, 5λ/10λ = 1/2 and 4λ/10λ = 2/5, respectively. Hence we have

P(N(0,1] = 4, N(1,6] = 1, N(6,10] = 2) = (7! / (4! · 1! · 2!)) (1/10)^4 (1/2)^1 (2/5)^2.
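The multinomial form admits a one-line numerical check (a sketch assuming scipy is available):

    from scipy.stats import multinomial

    # P(4 arrivals in (0,1], 1 in (1,6], 2 in (6,10] | 7 arrivals by time 10)
    print(multinomial.pmf([4, 1, 2], n=7, p=[0.1, 0.5, 0.4]))  # ≈ 0.00084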

(g) We have

P(N1 = 4, N7 = 8 | N3 = 6) = P(N1 = 4, N7 = 8, N3 = 6) / P(N3 = 6)
= P(N1 = 4, N3 − N1 = 2, N7 − N3 = 2) / P(N3 = 6)
= P(N1 = 4) P(N3 − N1 = 2) P(N7 − N3 = 2) / P(N3 = 6)   (by independent increments)
= P(N1 = 4) P(N2 = 2) P(N4 = 2) / P(N3 = 6)   (by stationary increments)
= [e^{−λ} (λ^4/4!)] [e^{−2λ} ((2λ)^2/2!)] [e^{−4λ} ((4λ)^2/2!)] / [e^{−3λ} ((3λ)^6/6!)]
= (6! / (4! · 2!)) (λ/3λ)^4 (2λ/3λ)^2 e^{−4λ} ((4λ)^2/2!)

(h) Since N2 ≤ N6, the event {N2 = 4, N6 = 3} is impossible, so P(N2 = 4, N7 = 5 | N6 = 3) = 0.

(i) We have

E[2N3^2 − 4N5 + 3N10] = 2E[N3^2] − 4E[N5] + 3E[N10]
= 2(E[N3]^2 + Var(N3)) − 4E[N5] + 3E[N10]
= 2(9λ^2 + 3λ) − 20λ + 30λ
= 18λ^2 + 16λ

(j) We have

Var(5N3 − 3N5) = Var(3N5 − 5N3)
= Var(3(N5 − N3) − 2N3)
= Var(3(N5 − N3)) + Var(−2N3)   (by independent increments)
= 9Var(N5 − N3) + 4Var(N3)
= 9Var(N2) + 4Var(N3)   (by stationary increments)
= 18λ + 12λ = 30λ

(k) We have

Var(N2 − 2N3 + 3N5) = Var(3N5 − 2N3 + N2)
= Var(3(N5 − N3) + (N3 − N2) + 2N2)
= Var(3(N5 − N3)) + Var(N3 − N2) + Var(2N2)   (by independent increments)
= 9Var(N5 − N3) + Var(N3 − N2) + 4Var(N2)
= 9Var(N2) + Var(N1) + 4Var(N2)   (by stationary increments)
= 18λ + λ + 8λ = 27λ
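Parts such as (e) and (k) also lend themselves to a Monte Carlo check (a sketch assuming numpy, with λ = 1 chosen arbitrarily); the joint vector (N2, N3, N5) is built from independent Poisson increments:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, n = 1.0, 1_000_000

    # Increments over the disjoint intervals (0,2], (2,3], (3,5] are independent.
    N2 = rng.poisson(2 * lam, n)
    N3 = N2 + rng.poisson(1 * lam, n)
    N5 = N3 + rng.poisson(2 * lam, n)

    # (e): P(2*N5 + N2 = 5) = e^{-2λ}(2λ) · e^{-3λ}(3λ) = 6λ² e^{-5λ} ≈ 0.0404 for λ = 1.
    print((2 * N5 + N2 == 5).mean(), 6 * lam**2 * np.exp(-5 * lam))

    # (k): Var(N2 - 2*N3 + 3*N5) = 27λ.
    print(np.var(N2 - 2 * N3 + 3 * N5))  # ≈ 27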

Question 6. Let (Nt )t be a Poisson process with rate λ > 0 modelling the number of
arrivals of customers to a gift shop during the time interval [0, t].

(a) What is the expected time until the fifth customer arrives?

(b) What is the probability that the time that passes between the ninth and tenth arrivals exceeds 2.8?

(c) What is the probability that there is no arrival during the time interval (13.2, 17.8]?

(d) What is the probability that there are exactly two arrivals during the time interval
(13.2, 17.8]?

(e) Consider the time period I = (3, 4.5] ∪ (7.5, 10]. Let MI be the number of arrivals
during the time period I. Find the distribution of MI .

(f) Compute Cov(Nt , Ns ) where s < t.

Solution:

(a) The arrival time S5 of the fifth customer is Gamma(5, λ), since it is the sum of five interarrival times, i.e. the sum of five independent, identically distributed Expon(λ) random variables. Therefore E[S5] = 5/λ.

(b) The interarrival time T10 = S10 − S9 is Expon(λ). Therefore P(T10 > 2.8) = e^{−2.8λ}.

(c) There is no arrival during the time interval (13.2, 17.8] if and only if the number of people that arrive during (0, 17.8] equals the number that arrive during (0, 13.2]. Hence we need to compute P(N17.8 = N13.2). We have

P(N17.8 = N13.2) = P(N17.8 − N13.2 = 0)
= P(N17.8−13.2 = 0)   (by stationary increments)
= P(N5.6 = 0) = e^{−5.6λ}.

(d) There are exactly two arrivals during the time interval (13.2, 17.8] if and only if the number of people that arrive during (0, 17.8] is larger by two than the number that arrive during (0, 13.2]. Hence we need to compute P(N17.8 = N13.2 + 2). We have

P(N17.8 = N13.2 + 2) = P(N17.8 − N13.2 = 2)
= P(N17.8−13.2 = 2)   (by stationary increments)
= P(N5.6 = 2) = e^{−5.6λ} (5.6λ)^2/2!.

(e) Note that N4.5 − N3 equals the number of arrivals during the time interval (3, 4.5], and N10 − N7.5 counts the number of arrivals during (7.5, 10]. Therefore MI = (N4.5 − N3) + (N10 − N7.5). Let X = N4.5 − N3 and Y = N10 − N7.5, so that MI = X + Y. By the independent increments property, X and Y are independent, and by stationary increments X ∼ Pois(1.5λ) and Y ∼ Pois(2.5λ). We compute the probability mass function of X + Y:

P(X + Y = k) = Σ_{j=0}^{k} P(X = j, Y = k − j)
= Σ_{j=0}^{k} P(X = j) P(Y = k − j)   (by independence of X and Y)
= Σ_{j=0}^{k} e^{−1.5λ} ((1.5λ)^j / j!) · e^{−2.5λ} ((2.5λ)^{k−j} / (k − j)!)   (Poisson marginals)
= e^{−4λ} (1/k!) Σ_{j=0}^{k} (k! / (j!(k − j)!)) (1.5λ)^j (2.5λ)^{k−j}
= e^{−4λ} (1/k!) (1.5λ + 2.5λ)^k   (by the binomial theorem)
= e^{−4λ} (4λ)^k / k!

Therefore MI ∼ Pois(4λ) and P(MI = k) = e^{−4λ} (4λ)^k / k! for k = 0, 1, . . . .

Another way: by stationary increments, N10 − N7.5 has the same distribution as N7 − N4.5. Hence MI = (N4.5 − N3) + (N10 − N7.5) has the same distribution as (N4.5 − N3) + (N7 − N4.5) = N7 − N3. (Here we use the fact that if A and B have the same distribution, and C is independent of both A and B, then A + C and B + C have the same distribution.) And N7 − N3 has the same distribution as N4 ∼ Pois(4λ). Therefore MI ∼ Pois(4λ) and P(MI = k) = e^{−4λ} (4λ)^k / k! for k = 0, 1, . . . .

Note that one could also use the superposition theorem to conclude directly that MI ∼ Pois(4λ).

(f) We have

Cov(Nt, Ns) = E[(Nt − E[Nt])(Ns − E[Ns])]
= E[(Nt − λt)(Ns − λs)]
= E[Nt Ns − λs Nt − λt Ns + λtλs]
= E[Nt Ns] − λs E[Nt] − λt E[Ns] + λtλs
= E[Nt Ns] − λtλs − λtλs + λtλs
= E[Nt Ns] − λtλs
= E[(Nt − Ns + Ns)Ns] − λtλs
= E[(Nt − Ns)Ns] + E[Ns^2] − λtλs
= E[Nt − Ns] E[Ns] + Var(Ns) + E[Ns]^2 − λtλs   (by independent increments)
= E[Nt−s] E[Ns] + λs + (λs)^2 − λtλs   (by stationary increments)
= λ(t − s)λs + λs + (λs)^2 − λtλs
= λtλs − (λs)^2 + λs + (λs)^2 − λtλs
= λs.
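The covariance formula can likewise be checked by simulation (a sketch assuming numpy; the values λ = 1, s = 2, t = 5 are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    lam, s, t, n = 1.0, 2.0, 5.0, 1_000_000

    # Build (N_s, N_t) from independent increments over (0, s] and (s, t].
    Ns = rng.poisson(lam * s, n)
    Nt = Ns + rng.poisson(lam * (t - s), n)
    print(np.cov(Ns, Nt)[0, 1])  # ≈ λ·s = 2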

