IME625A


C12.

HW
Monday, February 14, 2022 9:43 PM

Consider the Markov chain whose transition probability matrix is given below.


Decompose its states into communicating blocks and
identify the nature of the states. Think of a generic method
for decomposing a Markov chain into communicating
blocks. Is it possible to decompose a Markov chain in
two different ways?

Transition probability matrix (rows: current state, columns: next state):

      1     2     3     4     5     6     7
1    1/3   2/3    0     0     0     0     0
2    1/4   3/4    0     0     0     0     0
3     0     0    1/3    0    2/3    0     0
4     0     0     1     0     0     0     0
5     0     0     1     0     0     0     0
6    1/6    0    1/6   1/6    0    1/4   1/4
7     0     0     0     0     0     0     1
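One generic method for the decomposition: two states communicate iff each is reachable from the other, and a communicating class is a block iff it is closed (no transition leaves it). A minimal Python sketch of this method applied to the matrix above (states 1..7 stored as indices 0..6):

```python
from fractions import Fraction as F

# Transition matrix of the chain above (states 1..7 -> indices 0..6),
# kept exact with fractions.
P = [
    [F(1, 3), F(2, 3), 0, 0, 0, 0, 0],
    [F(1, 4), F(3, 4), 0, 0, 0, 0, 0],
    [0, 0, F(1, 3), 0, F(2, 3), 0, 0],
    [0, 0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0],
    [F(1, 6), 0, F(1, 6), F(1, 6), 0, F(1, 4), F(1, 4)],
    [0, 0, 0, 0, 0, 0, 1],
]
n = len(P)

def reachable(i):
    """States reachable from i in one or more transitions (DFS)."""
    seen, stack = set(), [i]
    while stack:
        u = stack.pop()
        for v in range(n):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

R = [reachable(i) for i in range(n)]

# i and j communicate iff each is reachable from the other.
classes = []
for i in range(n):
    cls = {j for j in R[i] if i in R[j]}
    if i in cls and cls not in classes:
        classes.append(cls)

# A communicating block is a closed class: no transition leaves it.
blocks = [c for c in classes if all(R[i] <= c for i in c)]
in_blocks = set().union(*blocks) if blocks else set()
transient = set(range(n)) - in_blocks

print("blocks   :", [sorted(s + 1 for s in b) for b in blocks])  # [[1, 2], [3, 5], [7]]
print("transient:", sorted(s + 1 for s in transient))            # [4, 6]
```

For the chain above, this yields the blocks {1, 2}, {3, 5}, and {7} (all finite, hence recurrent) and the transient states 4 and 6, answering the first part of the exercise.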

C12 Page 1
C12.P1
Tuesday, February 15, 2022 12:52 PM

Decomposition and the nature of states: We say that state-i is recurrent if

P(the chain returns to i | X_0 = i) = 1

or, equivalently, Σ_{n=1}^∞ p_ii^(n) = ∞; otherwise, state-i is transient.
Alternately, if i ↔ j and j is recurrent (or transient), then i is recurrent (or
transient). We don't need to do any of these if i is in a finite
communicating block or in the set of remaining states. In the first case, i is
recurrent, and in the second case, i is transient. If i is in an infinite
communicating block, then we cannot comment on the nature of i; it can
go either way.

States not in any communicating block are transient

Consider S = C_1 ∪ C_2 ∪ … ∪ C_m ∪ T, where C_1, …, C_m are the
communicating blocks of S and T is the set of remaining states. Let R(i) denote the
set of states that can be reached from i ∈ T. If i ∉ R(i), then it is impossible to
return to i, and thus, i is transient as claimed. If i ∈ R(i), then there is at
least one more member in R(i); otherwise, i is an absorbing state and {i}
is a communicating block, which is impossible. If j (≠ i) is in R(i) and
i ∉ R(j), then there is a path of no-return to i (via j), which makes i transient
as claimed. Finally, if all j ∈ R(i) lead to i, then {i} ∪ R(i) is a
communicating block, which is impossible. This establishes the claim.

The last part of the above argument relies on the following: Let R(i)
denote the set of states that can be reached from i. If all j ∈ R(i) lead
to i, then {i} ∪ R(i) is a communicating block. Verify the validity of this assertion
yourself.

States in finite communicating blocks are recurrent

Consider a finite communicating block C. If the Markov chain
reaches a state in C, it can never leave. Then Σ_{j∈C} p_ij = 1 for all i ∈ C,
which is the only requirement for a square matrix of non-negative entries to
be the transition probability matrix of a Markov chain. So, C itself can be
regarded as a Markov chain. In this chain, all pairs of states communicate.
Then the states are either all recurrent or all transient. The claim is that the
former is true.

Let i denote the state through which the Markov chain enters C.
Then X_0 = i for the Markov chain C. Let j denote an arbitrary state in C.
Then the claim is that j is recurrent. Assuming otherwise, let j be
transient. It can be proved that if j is transient, then Σ_{n=1}^∞ p_ij^(n) < ∞
for all i. We have shown this for i = j. A proof for the general case
appears in Appendix A. Since the series Σ_{n=1}^∞ p_ij^(n) is convergent, its
limiting term must be zero, i.e., lim_{n→∞} p_ij^(n) = 0. Then, since all states
of C communicate with j, every state of C is transient, and
lim_{n→∞} p_ij^(n) = 0 for all j ∈ C.

Now, the row sums of every n-step transition probability matrix equal 1, i.e.,

Σ_{j∈C} p_ij^(n) = 1

for all n ≥ 1. Taking the limit as n → ∞, we get
lim_{n→∞} Σ_{j∈C} p_ij^(n) = 1. Since C is finite, we can alter the order of limit
and sum, and obtain Σ_{j∈C} lim_{n→∞} p_ij^(n) = 1. This is in contradiction with
the above observation that lim_{n→∞} p_ij^(n) = 0 for all j ∈ C. Therefore, our
assumption that j is transient must be incorrect. Hence, j (and the whole
C) is recurrent.
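The two behaviors can be seen numerically on the chain from the HW above (C12 Page 1): the partial sums of p_11^(n) grow without bound (state 1 lies in the finite block {1, 2}), while the partial sums of p_66^(n) converge, since p_66^(n) = (1/4)^n. A sketch:

```python
# Partial sums of n-step return probabilities for the HW chain (floats
# here for brevity). For the recurrent state 1 the sum keeps growing;
# for the transient state 6 it settles at a finite limit.
P = [
    [1/3, 2/3, 0, 0, 0, 0, 0],
    [1/4, 3/4, 0, 0, 0, 0, 0],
    [0, 0, 1/3, 0, 2/3, 0, 0],
    [0, 0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0],
    [1/6, 0, 1/6, 1/6, 0, 1/4, 1/4],
    [0, 0, 0, 0, 0, 0, 1],
]

def matmul(A, B):
    """Plain matrix product of two square matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

Pn = P                      # P^1
s_rec, s_tr = 0.0, 0.0      # running sums of p_11^(n) and p_66^(n)
for n in range(1, 201):
    s_rec += Pn[0][0]
    s_tr += Pn[5][5]
    Pn = matmul(Pn, P)      # P^(n+1)

print(s_rec)   # grows roughly linearly in the number of terms
print(s_tr)    # converges: sum of (1/4)^n = (1/4)/(1 - 1/4) = 1/3
```

With 200 terms the first sum is already above 54 and still growing linearly, while the second has settled at (1/4)/(1 − 1/4) = 1/3.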

If C is infinite, then the change in the order of limit and sum is not
permitted (even with the convergence theorems), and we do not reach the
contradiction that 1 = 0. In fact, infinite communicating blocks can be of
either type, i.e., all states are either recurrent or all are transient. We provide
examples of both kinds next.

C12.P2
Tuesday, February 15, 2022 12:53 PM

Two examples of infinite communicating blocks

Let us consider the random walk on integers. Here, S = {…, −2, −1, 0, 1, 2, …} and

p_i,i+1 = p and p_i,i−1 = 1 − p, with 0 < p < 1,

for all i ∈ S. For any pair of states i < j,
p_ij^(j−i) = p^(j−i) > 0, and p_ji^(j−i) = (1−p)^(j−i) > 0. Thus, i ↔ j for all
i, j ∈ S. So, the whole S is a communicating block and it's infinite. We
show that all states are recurrent if p = 1/2, else all states are transient.

Since all pairs of states communicate, showing that one state is
recurrent/transient is sufficient to show that the whole block is
recurrent/transient. We show that Σ_{n=1}^∞ p_00^(n) = ∞ if p = 1/2, else it's finite.
We compute p_00^(n); the odd and even
cases are written separately. Starting with state 0, the random walk cannot
visit state 0 in an odd number of steps, i.e., p_00^(2n+1) = 0 for all n ≥ 0. For an even
number of steps, it is possible if and only if there are equal numbers of right
and left movements. So, p_00^(2n) = C(2n, n) p^n (1−p)^n for all n ≥ 1. Then

Σ_{n=1}^∞ p_00^(n) = Σ_{n=1}^∞ C(2n, n) [p(1−p)]^n.

If p = 1/2, then p(1−p) = 1/4 and, using Stirling's formula C(2n, n) ≈ 4^n/√(πn), we have

Σ_{n=1}^∞ C(2n, n) (1/4)^n ≈ Σ_{n=1}^∞ 1/√(πn).

The above infinite series diverges. Hence, Σ_{n=1}^∞ p_00^(n) = ∞, as expected.

p(1−p) is a strictly concave function of p with maximum at p = 1/2

and maximum value 1/4. Therefore, if p ≠ 1/2, then p(1−p) < 1/4. Also,
C(2n, n) ≤ 4^n for all n. Then

Σ_{n=1}^∞ C(2n, n) [p(1−p)]^n ≤ Σ_{n=1}^∞ [4p(1−p)]^n < ∞,

as the last series is geometric with common ratio 4p(1−p) < 1.
In this case, with time, the Markov chain may drift towards +∞ if p > 1/2
or towards −∞ if p < 1/2, and never return to 0 again.
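Both conclusions can be checked numerically from the formula p_00^(2n) = C(2n, n)[p(1−p)]^n: for p = 1/2 the partial sums grow like √N, while for p ≠ 1/2 they settle at a finite limit (in fact at 1/√(1 − 4p(1−p)) − 1, by the generating function of the central binomial coefficients). A sketch:

```python
def partial_sum(p, N):
    """Sum of p_00^(2n) = C(2n, n) * (p*(1-p))**n for n = 1..N.
    The term is updated recursively, C(2n, n) = C(2n-2, n-1)*(2n)(2n-1)/n^2,
    to stay inside floating-point range."""
    x = p * (1 - p)
    term, total = 1.0, 0.0
    for n in range(1, N + 1):
        term *= (2 * n) * (2 * n - 1) / (n * n) * x
        total += term
    return total

# p = 1/2: terms behave like 1/sqrt(pi*n), so partial sums grow like sqrt(N);
# quadrupling N roughly doubles the sum.
print(partial_sum(0.5, 1000), partial_sum(0.5, 4000))
# p = 0.6: 4p(1-p) = 0.96 < 1, and the series converges to 1/sqrt(0.04) - 1 = 4.
print(partial_sum(0.6, 2000))  # ~4.0
```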

Decomposition and long-run behavior: Given a non-absorbing Markov

chain, first, we shall decompose it into the communicating blocks
C_1, C_2, …, C_m, where m ≥ 1, and the set of remaining states
T = S − (C_1 ∪ C_2 ∪ … ∪ C_m). We know that all states in T are transient, and if C_k,
for k = 1, 2, …, m, is finite, then all states in C_k are recurrent. If some C_k is
infinite, then we need to check the nature of any one of the states in C_k to
know the nature of all states in C_k.

C12 Page 4
for , is finite, then all states in are recurrent. If some is
infinite, then we need to check the nature of any one of the states in to
know the nature of all states in .

Decomposition of states, apart from identifying the nature of states, helps in

studying the long-run behavior of Markov chains. If the chain reaches a
communicating block C_k, then it can never leave, i.e., Σ_{j∈C_k} p_ij = 1 for all
i ∈ C_k. This makes every communicating block a Markov chain. Every
pair of states in these Markov chains communicates. Such chains are
known as irreducible Markov chains. If we have methods for studying the
long-run behavior of irreducible Markov chains, then for a general
Markov chain, we just need to "combine" the long-run behaviors of its
communicating blocks. States in T, being transient, do not feature in the
long-run behavior of the chain (to be shown later). We will discuss the
process of combining long-run behaviors of communicating blocks at the
end, after we have developed methods for studying the long-run behavior of
irreducible Markov chains.

C12.P3
Tuesday, February 15, 2022 12:54 PM

Long-run fractions: Let us consider an irreducible Markov chain whose

states are recurrent. Such chains are called recurrent Markov chains. We
calculate the long-run fraction of time spent in different states by such a chain.
We will consider irreducible Markov chains with transient states later.
Assuming an arbitrary starting state X_0 = i, we want to determine

π_j(i) = lim_{n→∞} (number of visits to state-j in the first n transitions)/n.

Let π_j(i, n) denote the fraction of visits to
state-j during the first n transitions of the chain after starting in state-i.
Then π_j(i) = lim_{n→∞} π_j(i, n). Note that π_j(i, n) is a random variable, which we
hope converges to a constant as n → ∞. Using T_1, the number of transitions
until the first visit to state-j, and T_2, T_3, …, the numbers of transitions
between successive visits, we can capture visits to
state-j by the chain as follows.

The first visit to state-j happens after T_1 transitions, the second visit happens

after another T_2 transitions, the third visit happens after another T_3
transitions, and so on. This observation is true even if i = j. Due to the Markov
property, T_1 is independent of T_2, T_3, …, which themselves are iid
random variables. Due to recurrence, T_2, T_3, … are finite, as in,
these cannot be infinity with positive probability. The same is true for T_1,
because state-j is visited again and again due to recurrence and i leads to j.

Since T_1, T_2, … are finite, the infinite time horizon in the

expression of π_j(i), i.e., n → ∞, can be captured by n = T_1 + T_2 + … + T_k with
k → ∞. The number of visits to state-j during this time is k. Then the long-
run fraction can be rewritten as

π_j(i) = lim_{k→∞} k/(T_1 + T_2 + … + T_k) = lim_{k→∞} 1/((1/k) Σ_{l=1}^k T_l).

By the law of large numbers, (1/k) Σ_{l=1}^k T_l → E[T], provided that the

expectation exists, i.e., E[T] < ∞. Here T denotes the time between successive
returns to state-j, the common distribution of T_2, T_3, …; the single
differently-distributed term T_1 does not affect the limit. Note that T can be finite, as in
P(T < ∞) = 1, but its expectation may not exist. In such cases,
(1/k) Σ_{l=1}^k T_l → ∞. Therefore,

π_j(i) = 1/E[T], with 1/∞ interpreted as 0.

The above result is intuitive. On average, the chain returns to state-j after
every E[T] transitions. Then the long-run fraction of visits to state-j
should be 1/E[T], which becomes zero if E[T] = ∞. It shall be noted
that π_j(i) does not depend on i (the starting state) for recurrent Markov
chains, which are one kind of irreducible Markov chain. We will see that the
same is true for the other kind of irreducible Markov chain, i.e., transient
chains, as well. Thus, for irreducible Markov chains, π_j(i) is the same for
all i and is denoted by π_j. This observation is not true for general Markov
chains.
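The case E[T] = ∞ occurs even for recurrent states: for the symmetric random walk on integers (shown recurrent earlier in this lecture), the return time T to state 0 is finite with probability 1, yet E[T] = ∞, so π_0 = 0. A simulation sketch (return times capped at 10^5 steps so the program terminates; the cap only pulls the sample mean down):

```python
import random

random.seed(2)

def return_time(cap=100_000):
    """One sampled return time to 0 of the symmetric random walk on
    integers (capped at `cap` steps)."""
    pos, t = 0, 0
    while True:
        pos += 1 if random.random() < 0.5 else -1
        t += 1
        if pos == 0 or t >= cap:
            return t

# Every sampled T is finite (recurrence), but the sample mean does not
# settle down: it tends to keep growing with the number of samples,
# consistent with E[T] = infinity and hence pi_0 = 1/E[T] = 0.
for k in (100, 1000, 5000):
    times = [return_time() for _ in range(k)]
    print(k, sum(times) / k)
```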

Let us consider the two-state Markov chain on {1, 2} with p_11 = 1/3,
p_12 = 2/3, p_21 = 1/4, p_22 = 3/4 (the block {1, 2} of the chain on C12 Page 1). {1, 2} is a finite
communicating block, and hence, the block itself is a recurrent Markov
chain. For state-1, P(T = 1) = 1/3 and P(T = n) = (2/3)(3/4)^(n−2)(1/4) for n ≥ 2. Then

E[T] = (1/3)(1) + Σ_{n=2}^∞ n(2/3)(3/4)^(n−2)(1/4) = 11/3.

Then π_1 = 1/E[T] = 3/11. Similarly,
one can verify that π_2 = 8/11.
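This can be cross-checked by simulation, assuming the two-state block with p_11 = 1/3, p_12 = 2/3, p_21 = 1/4, p_22 = 3/4: the long-run fraction of time in state 1 should approach 3/11, and the sample mean of the return times to state 1 should approach 11/3. A sketch:

```python
import random

random.seed(1)

# Two-state recurrent block {1, 2} with p11 = 1/3, p12 = 2/3,
# p21 = 1/4, p22 = 3/4. Theory: pi_1 = 3/11, pi_2 = 8/11, E[T] = 11/3.
def step(state):
    if state == 1:
        return 1 if random.random() < 1/3 else 2
    return 1 if random.random() < 1/4 else 2

N = 500_000
state, visits1, last_visit, total_gap = 1, 0, 0, 0
for t in range(1, N + 1):
    state = step(state)
    if state == 1:
        visits1 += 1
        total_gap += t - last_visit   # one sample of the return time T
        last_visit = t

print("long-run fraction of time in 1:", visits1 / N)        # ~ 3/11 = 0.2727
print("sample mean of return time T  :", total_gap / visits1)  # ~ 11/3 = 3.667
```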

