
Lecture 9: Classification of States

Peter W. Jones and Peter Smith, Stochastic Processes: An Introduction, Second Edition, CRC Press

Let us return to the general m-state chain with states X1, X2, ..., Xm and the transition matrix
P = [p_ij],  1 <= i, j <= m.
For a homogeneous chain, recall that p_ij is the probability that a transition occurs from state X_i to state X_j at any step or change of state in the chain. We intend to investigate and classify some of the more common types of states which can occur in Markov chains.

(a) Absorbing state


Once entered, there is no escape from an absorbing state. An absorbing state X_i is characterized by the probabilities
p_ii = 1,  p_ij = 0  (j != i, j = 1, 2, ..., m)
in the i-th row of P. Absorbing states are therefore recognizable in a Markov chain by a value 1 in a diagonal element of the transition matrix. Since such matrices are stochastic, all other elements in the same row must be zero, so that once entered, there is no escape from the absorbing state. For example, in the Markov chain with

        [ 1/2  1/4  1/4 ]
P =     [  0    1    0  ]
        [ 1/4  1/4  1/2 ]

the state X2 is absorbing. This is also illustrated in the directed graph of Fig.1.

Fig.1 Transition diagram for the above transition matrix.
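As a quick machine check, absorbing states can be read off a transition matrix by scanning the diagonal. A minimal sketch, using the example matrix above with exact fractions:

```python
from fractions import Fraction as F

# Transition matrix of the example above; exact fractions avoid rounding.
P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(0),    F(1),    F(0)],
     [F(1, 4), F(1, 4), F(1, 2)]]

# A state i is absorbing iff p_ii = 1; the rest of that row is then
# automatically 0, because each row of a stochastic matrix sums to 1.
absorbing = [i for i, row in enumerate(P) if row[i] == 1]
print(absorbing)  # [1], i.e. the second state X2 is absorbing
```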


(b) Periodic state
The probability of a return to X_i at step n is p_ii(n). Let t be an integer greater than 1. Suppose that
p_ii(n) = 0   for n != t, 2t, 3t, ...,
p_ii(n) != 0  for n = t, 2t, 3t, ...
In this case the state X_i is said to be periodic with period t. If, for a state, no such t exists with this property, then the state is described as aperiodic.

Let
d(i) = gcd{ n : p_ii(n) > 0 },
that is, the greatest common divisor (gcd) of the set of integers n for which p_ii(n) > 0. Then the state X_i is said to be periodic if d(i) > 1 and aperiodic if d(i) = 1.

# A four-state Markov chain has the transition matrix

        [ 0  1/2  0  1/2 ]
P =     [ 0   0   1   0  ]
        [ 1   0   0   0  ]
        [ 0   0   1   0  ]

Show that all states have period 3.

Sol.: The transition diagram shown below (Fig.2) suggests that all states have period 3. For example, if the chain starts at X1, then returns to X1 are only possible at steps 3, 6, 9, ..., either through X2 and X3 or through X4 and X3.

Fig.2 The transition diagram

The analysis of chains with periodic states can be complicated. However, one can check for a suspected periodicity as follows. By direct computation,

              [ 1   0   0   0  ]
P^3 = S =     [ 0  1/2  0  1/2 ]
              [ 0   0   1   0  ]
              [ 0  1/2  0  1/2 ]

In this example,
S^2 = P^6 = SS = S,
so that
S^r = P^(3r) = S,  r = 1, 2, 3, ...,
which always has nonzero elements on its diagonal. On the other hand,

                      [ 0  1/2  0  1/2 ]                         [ 0   0   1   0  ]
P^(3r+1) = S^r P =    [ 0   0   1   0  ],   P^(3r+2) = S^r P^2 = [ 1   0   0   0  ]
                      [ 1   0   0   0  ]                         [ 0  1/2  0  1/2 ]
                      [ 0   0   1   0  ]                         [ 1   0   0   0  ]

and both these matrices have zero diagonal elements for r = 1, 2, 3, .... Hence for i = 1, 2, 3, 4,
p_ii(n) = 0   for n != 3, 6, 9, ...,
p_ii(n) != 0  for n = 3, 6, 9, ...,
which means that all states have period 3.
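The same periodicity check can be automated: compute successive powers of P and take the gcd of the steps at which each diagonal entry is nonzero. A sketch using exact fractions (checking powers up to n = 12 is enough for this chain; the `matmul` helper is our own):

```python
from fractions import Fraction as F
from math import gcd
from functools import reduce

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[F(0), F(1, 2), F(0), F(1, 2)],
     [F(0), F(0),    F(1), F(0)],
     [F(1), F(0),    F(0), F(0)],
     [F(0), F(0),    F(1), F(0)]]

# Period of state i: gcd of all n with p_ii(n) > 0 (checked here up to n = 12).
return_steps = {i: [] for i in range(4)}
Pn = P
for n in range(1, 13):
    for i in range(4):
        if Pn[i][i] > 0:
            return_steps[i].append(n)
    Pn = matmul(Pn, P)

periods = {i: reduce(gcd, steps) for i, steps in return_steps.items()}
print(periods)  # every state has period 3
```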
(c) Persistent state

Let f_j(n) be the probability that the first return or visit to X_j occurs at the n-th step. This probability is not the same as p_jj(n), which is the probability that a return occurs at the n-th step and includes possible returns at steps 1, 2, 3, ..., n - 1 also. It follows that
p_jj(1) = p_jj = f_j(1),
p_jj(2) = f_j(2) + f_j(1) p_jj(1),
p_jj(3) = f_j(3) + f_j(1) p_jj(2) + f_j(2) p_jj(1),        (1)
and, in general,
p_jj(n) = f_j(n) + sum_{r=1}^{n-1} f_j(r) p_jj(n-r),  n >= 2.

The terms in (1) state that the probability of a return at the third step is the probability of a first return at the third step, or the probability of a first return at the first step and a return two steps later, or the probability of a first return at the second step and a return one step later.
The foregoing equations become iterative formulas for the sequence of first returns f_j(n):
f_j(1) = p_jj(1) = p_jj,
f_j(n) = p_jj(n) - sum_{r=1}^{n-1} f_j(r) p_jj(n-r),  n >= 2.

The probability that a chain returns at some step to the state X_j is
f_j = sum_{n=1}^{infinity} f_j(n).
If f_j = 1, then a return to X_j is certain, and X_j is called a persistent state.
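The iterative formula translates directly into code. The helper below (our own naming) recovers f_j(n) from the n-step return probabilities p_jj(n); as a sanity check, for the two-state chain with all entries 1/2 it reproduces the geometric first-return probabilities (1/2)^n:

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def first_return_probs(P, j, N):
    """f_j(1..N) from f_j(n) = p_jj(n) - sum_{r=1}^{n-1} f_j(r) p_jj(n-r)."""
    pjj, Pn = [], P
    for _ in range(N):              # p_jj(n) for n = 1..N via powers of P
        pjj.append(Pn[j][j])
        Pn = matmul(Pn, P)
    f = []
    for n in range(1, N + 1):
        f.append(pjj[n - 1] - sum(f[r - 1] * pjj[n - r - 1] for r in range(1, n)))
    return f

P = [[F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]
f = first_return_probs(P, 0, 10)
print(f[:3])  # 1/2, 1/4, 1/8 (printed as Fractions): geometric first returns
```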

# A three-state Markov chain has the transition matrix

        [  p   1-p  0 ]
P =     [  0    0   1 ]
        [ 1-q   0   q ]

where 0 < p < 1, 0 < q < 1. Show that the state X1 is persistent.
For simple chains a direct approach using the transition diagram is often easier than the formula for f_j(n). For this example the transition diagram is shown in Fig.3.

If a sequence starts at X1, then first returns to X1 can occur at every step except n = 2, since after two steps the chain must be in state X3. From Fig.3 it can be argued that
f_1(1) = p,  f_1(2) = 0,  f_1(3) = (1 - p) . 1 . (1 - q),
f_1(n) = (1 - p) . 1 . q^(n-3) . (1 - q),  n >= 4.
The last result, for f_1(n) with n >= 4, follows from the sequence of transitions
X1 -> X2 -> X3 -> X3 -> ... -> X3 -> X1,
in which the chain remains at X3 for n - 3 steps.

Fig.3 Transition diagram
The probability f_1 that the system returns at least once to X1 is
f_1 = sum_{n=1}^{infinity} f_1(n) = p + sum_{n=3}^{infinity} (1 - p)(1 - q) q^(n-3)
    = p + (1 - p)(1 - q) sum_{s=0}^{infinity} q^s,   s = n - 3,
    = p + (1 - p)(1 - q) . 1/(1 - q) = p + (1 - p) = 1.
Hence f_1 = 1, and consequently the state X1 is persistent.
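A quick numerical check of this series, with illustrative values p = 0.3, q = 0.6 (our own choice; any values in (0, 1) work):

```python
p, q = 0.3, 0.6  # illustrative values, our own choice
# Partial sum of f_1 = p + sum_{n>=3} (1-p)(1-q) q^(n-3); the tail is negligible.
f1 = p + sum((1 - p) * (1 - q) * q ** (n - 3) for n in range(3, 200))
print(f1)  # approximately 1.0: a return to X1 is certain
```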

Persistent states can have a further distinction. The mean recurrence time mu_j of a persistent state X_j, for which sum_{n=1}^{infinity} f_j(n) = 1, is given by
mu_j = sum_{n=1}^{infinity} n f_j(n).
In the above example, the state X1 is persistent and its mean recurrence time is given by
mu_1 = sum_{n=1}^{infinity} n f_1(n) = p + (1 - p)(1 - q) sum_{n=3}^{infinity} n q^(n-3)
     = p + (1 - p)(1 - q) (3 - 2q)/(1 - q)^2
     = (3 - 2p - 2q + pq)/(1 - q),
which is finite. For some chains, however, the mean recurrence time can be infinite; in other words, the mean number of steps to a first return is unbounded.
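The closed form for mu_1 can be checked against a partial sum of the series (same illustrative p, q as before, our own choice):

```python
p, q = 0.3, 0.6  # illustrative values, our own choice
# Partial sum of mu_1 = p + (1-p)(1-q) sum_{n>=3} n q^(n-3)
mu1_series = p + (1 - p) * (1 - q) * sum(n * q ** (n - 3) for n in range(3, 500))
# Closed form derived in the text
mu1_closed = (3 - 2 * p - 2 * q + p * q) / (1 - q)
print(mu1_series, mu1_closed)  # the two values agree
```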
# A three-state Markov chain has the transition matrix

          [ 1/2      1/2   0       ]
P_n =     [  0        0    1       ]
          [ 1/(n+1)   0    n/(n+1) ]

where P_n is the transition matrix at step n. Show that X1 is a persistent null state.

Fig.4 The transition diagram

Sol.: The transition diagram is shown in Fig.4. From the figure,
f_1(1) = 1/2,  f_1(2) = 0,  f_1(3) = (1/2) . 1 . (1/4) = 1/8,
f_1(n) = (1/2) . 1 . (3/4)(4/5) ... ((n-1)/n) . 1/(n+1) = 3/(2n(n+1)),  n >= 4.
Hence
f_1 = 1/2 + 1/8 + (3/2) sum_{n=4}^{infinity} 1/(n(n+1)).
Since
1/(n(n+1)) = 1/n - 1/(n+1),
it follows that
sum_{n=4}^{infinity} 1/(n(n+1)) = lim_{N->infinity} sum_{n=4}^{N} (1/n - 1/(n+1)) = lim_{N->infinity} (1/4 - 1/(N+1)) = 1/4.
Hence
f_1 = 5/8 + 3/8 = 1,
which means that X1 is persistent. On the other hand, the mean recurrence time is
mu_1 = sum_{n=1}^{infinity} n f_1(n) = 7/8 + (3/2) sum_{n=4}^{infinity} 1/(n+1)
     = 7/8 + (3/2)(1/5 + 1/6 + ...) = 7/8 + (3/2) sum_{n=5}^{infinity} 1/n.
The series in the previous equation is the harmonic series
sum_{n=1}^{infinity} 1/n = 1 + 1/2 + 1/3 + 1/4 + sum_{n=5}^{infinity} 1/n
minus the first four terms. The harmonic series is divergent, which means that mu_1 = infinity. Hence X1 is persistent and null.
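The contrast between f_1 = 1 and mu_1 = infinity shows up numerically: the first-return probabilities sum to 1, while the partial sums of n f_1(n) keep growing. A sketch using the formulas derived above:

```python
def f1(n):
    # First-return probabilities derived above; the general formula
    # 3/(2n(n+1)) also gives the correct f1(3) = 1/8.
    if n == 1:
        return 0.5
    if n == 2:
        return 0.0
    return 3.0 / (2 * n * (n + 1))

total = sum(f1(n) for n in range(1, 200001))
mu_partial = [sum(n * f1(n) for n in range(1, N + 1)) for N in (10, 1000, 100000)]
print(total)       # approximately 1: X1 is persistent
print(mu_partial)  # partial sums grow without bound (like log N): X1 is null
```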
(d) Transient state

For a persistent state the probability of a first return at some step in the future is certain. For some states,
f_j = sum_{n=1}^{infinity} f_j(n) < 1,
which means that the probability of a first return is not certain. Such states are described as transient.

# A four-state Markov chain has the transition matrix

        [ 0     0.50  0.25  0.25 ]
P =     [ 0.50  0.50  0     0    ]
        [ 0     0     1     0    ]
        [ 0     0     0.50  0.50 ]

Show that X1 is a transient state.

Fig.5 Transition diagram


The transition diagram is shown in Fig.5. From the figure,
f_1(1) = 0,  f_1(2) = (1/2).(1/2) = (1/2)^2,  f_1(3) = (1/2)^3,  f_1(n) = (1/2)^n.
Hence
f_1 = sum_{n=1}^{infinity} f_1(n) = sum_{n=2}^{infinity} (1/2)^n = 1/2 < 1,
implying that X1 is a transient state. The reason for the transience of X1 can be seen from Fig.5, where transitions from X3 or X4 to X1 or X2 are not possible.
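Numerically, the geometric series for f_1 stops well short of 1:

```python
# f_1(n) = (1/2)^n for n >= 2; the tail beyond n = 59 is negligible.
f1 = sum(0.5 ** n for n in range(2, 60))
print(f1)  # approximately 0.5 < 1, so X1 is transient
```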

(e) Ergodic state

A state which is persistent, nonnull, and aperiodic is called ergodic. Ergodic states are important in the classification of chains and in the existence of limiting probability distributions.

# A three-state Markov chain has the transition matrix

        [  p   1-p  0 ]
P =     [  0    0   1 ]
        [ 1-q   0   q ]

where 0 < p < 1, 0 < q < 1. Show that the state X1 is ergodic.
Sol.: For this problem, it is known that
f_1(1) = p,  f_1(2) = 0,  f_1(n) = (1 - p)(1 - q) q^(n-3),  n >= 3.
It follows that its mean recurrence time is
mu_1 = sum_{n=1}^{infinity} n f_1(n) = (3 - 2p - 2q + pq)/(1 - q) < infinity.
The finiteness of mu_1 implies that X1 is nonnull. Also, the diagonal elements satisfy p_ii(n) > 0 for n >= 3 and i = 1, 2, 3, which means that X1 is aperiodic. Hence, from the definition above, X1 is ergodic.

CLASSIFICATION OF CHAINS
So far we have considered some defining properties of individual states. Next, we define properties of chains which are common properties of states in the chain.

(a) Irreducible chains


An irreducible chain is one in which every state can be reached, or is accessible, from every other state in the chain in a finite number of steps. That any state X_j can be reached from any other state X_i means that p_ij(n) > 0 for some integer n.
A matrix A = [a_ij] is said to be positive if a_ij > 0 for all i, j. A Markov chain with transition matrix P is said to be regular if there exists an integer N such that P^N is positive.

# Show that the three-state chain with transition matrix

        [ 1/3  1/3  1/3 ]
P =     [  0    0    1  ]
        [  1    0    0  ]

defines a regular (and hence irreducible) chain.

Sol.: For the transition matrix P,

         [ 4/9  1/9  4/9 ]          [ 16/27  4/27  7/27 ]
P^2 =    [  1    0    0  ],  P^3 =  [  1/3   1/3   1/3  ]
         [ 1/3  1/3  1/3 ]          [  4/9   1/9   4/9  ]

Hence P^3 is a positive matrix, which means that the chain is regular.
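The regularity test is mechanical: raise P to successive powers and look for one with all entries positive. A sketch with exact fractions (the `matmul` helper is our own):

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[F(1, 3), F(1, 3), F(1, 3)],
     [F(0),    F(0),    F(1)],
     [F(1),    F(0),    F(0)]]

P3 = matmul(matmul(P, P), P)
regular = all(x > 0 for row in P3 for x in row)
print(P3[0])   # first row of P^3: 16/27, 4/27, 7/27
print(regular) # True: P^3 is positive, so the chain is regular
```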
An important feature of an irreducible chain is that all its states are of the same type, that is, either all transient or all persistent (null or nonnull), and all have the same period. A proof of this is given in Feller (1968, p.391). This means that the classification of all states in an irreducible chain can be inferred from the known classification of one state. It is intuitively reasonable to infer also that the states of a finite irreducible chain cannot all be transient, since that would mean a return to any state would not be certain even though all states are accessible from all other states in a finite number of steps. This requires a proof which will not be included here.

(b) Closed sets

A Markov chain may contain some states which are transient, some which are persistent, absorbing states, and so on. The persistent states can be part of closed subchains. A set of states C in a Markov chain is said to be closed if every state within C can be reached from every other state in C, and no state outside C can be reached from any state inside C. Algebraically, a necessary and sufficient condition for the latter is that
p_ij = 0  for all X_i in C and all X_j not in C.
An absorbing state is a closed set with just one state. Note also that a closed subset is itself an irreducible subchain of the full Markov chain.
# Discuss the status of each state in the 6-state Markov chain with transition matrix

        [ 0.5   0.5   0     0     0    0    ]
        [ 0.25  0.75  0     0     0    0    ]
P =     [ 0.25  0.25  0.25  0.25  0    0    ]
        [ 0.25  0     0.25  0.25  0    0.25 ]
        [ 0     0     0     0     0.5  0.5  ]
        [ 0     0     0     0     0.5  0.5  ]

A diagram representing the chain is shown in Fig.6. As usual, the figure is a great help in settling questions of which sets of states are closed. It can be seen that {E1, E2} is a closed irreducible subchain, since no states outside the set can be reached from E1 and E2. Similarly, {E5, E6} is a closed irreducible subchain. The states E3 and E4 are transient. All states are aperiodic, which means that E1, E2, E5, and E6 are ergodic.

Fig.6
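Closed sets can also be found mechanically from the condition p_ij = 0 for X_i in C, X_j not in C, via a reachability search. A sketch (state indices 0-5 stand for E1-E6; the helper names are our own):

```python
P = [[0.5,  0.5,  0,    0,    0,   0],
     [0.25, 0.75, 0,    0,    0,   0],
     [0.25, 0.25, 0.25, 0.25, 0,   0],
     [0.25, 0,    0.25, 0.25, 0,   0.25],
     [0,    0,    0,    0,    0.5, 0.5],
     [0,    0,    0,    0,    0.5, 0.5]]

def reachable(P, i):
    """All states reachable from i (including i) along positive-probability edges."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, puv in enumerate(P[u]):
            if puv > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_closed(P, C):
    # C is closed iff nothing outside C is reachable from inside C.
    return all(reachable(P, i) <= C for i in C)

print(is_closed(P, {0, 1}), is_closed(P, {4, 5}), is_closed(P, {2, 3}))
# True True False: {E1,E2} and {E5,E6} are closed, {E3,E4} is not
```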
(c) Ergodic chains

As we have seen, all the states in an irreducible chain belong to the same class. If all states are ergodic, that is, persistent, nonnull, and aperiodic, then the chain is described as an ergodic chain.

# Show that all states of the chain with transition matrix

        [ 1/3  1/3  1/3 ]
P =     [  0    0    1  ]
        [  1    0    0  ]

are ergodic.

Sol.: This chain has been considered earlier, where it was shown to be regular (and hence irreducible), which means that all states must be persistent, nonnull, and aperiodic. Hence all states are ergodic.
# Consider the three-state Markov chain with transition matrix

        [ 1/5  4/5  0 ]
P =     [  0    0   1 ]
        [  1    0   0 ]

Show that all states are ergodic. Find the eigenvalues of P and Q = lim_{n->infinity} P^n. Determine the mean recurrence times mu_1, mu_2, mu_3 for each state, and confirm that the rows of Q all have elements 1/mu_1, 1/mu_2, 1/mu_3.

Sol.: It is easy to check that P^4 is a positive matrix, which implies that the chain is ergodic. The eigenvalues of P are given by

                 | 1/5 - L   4/5    0  |
det(P - L I3) =  |    0      -L     1  |  = -L^3 + (1/5)L^2 + 4/5
                 |    1       0    -L  |

              = (1/5)(1 - L)(5L^2 + 4L + 4) = 0,

writing L for the eigenvalue lambda. Hence the eigenvalues can be denoted by
L1 = 1,  L2 = -2/5 + (4/5)i,  L3 = -2/5 - (4/5)i.
The corresponding eigenvectors are

       [ 1 ]        [ -2/5 + (4/5)i ]        [ -2/5 - (4/5)i ]
r1 =   [ 1 ],  r2 = [   -1/2 - i    ],  r3 = [   -1/2 + i    ]
       [ 1 ]        [      1        ]        [      1        ]

The matrix C may be defined as earlier:

                   [ 1   -2/5 + (4/5)i   -2/5 - (4/5)i ]
C = [r1 r2 r3] =   [ 1     -1/2 - i        -1/2 + i    ]
                   [ 1        1                1        ]

The computed inverse is given by

                 [    20         16        16   ]
C^(-1) = (1/52)  [ -10 - 30i  -8 + 14i   18 + i ]
                 [ -10 + 30i  -8 - 14i   18 - i ]
Therefore

            [ 1         0                    0          ]
P^n = C     [ 0  (-2/5 + (4/5)i)^n           0          ]  C^(-1)
            [ 0         0            (-2/5 - (4/5)i)^n  ]

Since |L2| = |L3| = 2/sqrt(5) < 1, the middle and last diagonal entries vanish as n -> infinity. Hence

                       [ 1  0  0 ]                   [ 5  4  4 ]
Q = lim_{n->infinity} P^n = C [ 0  0  0 ] C^(-1) = (1/13)  [ 5  4  4 ]
                       [ 0  0  0 ]                   [ 5  4  4 ]

The invariant distribution is therefore p = (5/13, 4/13, 4/13).

The first returns f_i(n) for each of the states can be easily calculated from the transition diagram, Fig.7. Thus
f_1(1) = 1/5,  f_1(2) = 0,  f_1(3) = (4/5) . 1 . 1 = 4/5,
f_2(1) = f_3(1) = 0,  f_2(2) = f_3(2) = 0,  f_2(n) = f_3(n) = (4/5)(1/5)^(n-3),  n >= 3.
Hence
mu_1 = sum_{n=1}^{infinity} n f_1(n) = 1/5 + 3 . (4/5) = 13/5,
mu_2 = mu_3 = sum_{n=1}^{infinity} n f_2(n) = sum_{n=3}^{infinity} n (4/5)(1/5)^(n-3) = 13/4.
The vector of reciprocals
(1/mu_1, 1/mu_2, 1/mu_3) = (5/13, 4/13, 4/13)
agrees with the vector p above calculated by the eigenvalue method.

Fig.7 Transition diagram
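Since the subdominant eigenvalues have modulus 2/sqrt(5) < 1, the powers P^n converge quickly, and Q and the reciprocal mean recurrence times can be checked directly in floating point (a sketch; the `matmul` helper is our own):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.2, 0.8, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]

Pn = P
for _ in range(199):   # P^200: |L2|^200 = (4/5)^100 is about 2e-10
    Pn = matmul(Pn, P)

print(Pn[0])  # each row approaches [5/13, 4/13, 4/13]
mu = [1 / x for x in Pn[0]]
print(mu)     # approaches [13/5, 13/4, 13/4] = [mu_1, mu_2, mu_3]
```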

END
