Discretization of the Markov Regime Switching AR(1) Process∗

Yan Liu
Wuhan University
Abstract
We propose a simple and theory-based method to discretize the Markov regime switching AR(1) process into a first order Markov chain. The method is based on closed form expressions for the regime conditional first and second moments of the process, in conjunction with the Rouwenhorst method for constructing a proper state space and transition matrix. The resulting discrete Markov chain exactly replicates the regime conditional and unconditional means and variances, and the regime conditional autocorrelations, of the original process. The benchmark method is subject to a bias in the unconditional autocorrelation approximation; however, simulation results show that the magnitude of the bias is small. At the cost of compromising the accuracy of the regime conditional autocorrelations, two modifications of the benchmark method with respect to the construction of the transition matrix may improve the unconditional autocorrelation approximation while leaving the other moments unaffected, especially when the original process is unconditionally persistent.
Key words: Discretization, Markov regime switching process.
JEL codes: C63.
I Introduction
One of the most widely used approaches to solving recursive models in economics is the discrete approximation of the original dynamic programming problem. This approach requires discretization of the state space of the original model, which can be continuous in both endogenous state variables and exogenous shock variables. It is typical to model a shock process as a Markov process, especially an AR(1) process. Since AR(1) processes are easy to estimate and provide a simple way of capturing the persistent effects of shocks, it is no surprise that such a choice
∗ Department of Finance, Economics and Management School, Wuhan University. Email: yanliu.ems@whu.edu.cn. I am grateful for comments from participants of the 3rd HenU/INFER Workshop on Applied Macroeconomics, and for financial support from NSFC grant (No. 71503191). The usual disclaimer applies.
becomes predominant in applications of recursive methods. Correspondingly, a host of discretization methods for the AR(1) process is available in the literature, with famous ones such as Tauchen (1986), Tauchen and Hussey (1991), Deaton (1991), and Rouwenhorst (1995).1 In particular, Rouwenhorst's method provides an almost complete solution to the discretization problem of an AR(1) process, in the sense that it delivers exact replications of the first and second moments of primary concern, both conditional and unconditional, of the AR(1) process under consideration.
However, most of the seminal discretization methods deal only with the AR(1) process, with at most a few extensions to the discretization of VAR processes. Such a situation poses an obstacle for researchers attempting to model the shock process with a Markov process other than the AR(1) process. One such case occurs in the field of macroeconomics. Ever since the seminal work of Hamilton (1989), the use of autoregressive models with Markov regime switching coefficients has been pervasive, in both empirical and theoretical work; see Hamilton (2015) for an up-to-date survey. Despite its wide use in many areas of macroeconomics, there is so far no theory-based method for the discretization of such a process, which we believe has become a restriction on using this otherwise important process in formulating and solving recursive dynamic models.2
In this paper, we propose a theory-based method to discretize a Markov regime switching AR(1) process, henceforth MRS AR(1). The method is based on an analytical characterization of the conditional moments of the MRS AR(1) process. In particular, we give closed form expressions for the regime-conditional mean, variance, and autocorrelation of the MRS AR(1) process. Based on these regime-conditional moments, we modify the Rouwenhorst method to construct an appropriate discrete Markov chain which exactly replicates these moments. The method ensures that the local dynamic properties of the MRS AR(1) process, as captured by the conditional moments, are approximated accurately. In addition, we show that the discretization also replicates the unconditional mean and variance of the MRS AR(1) process, while the unconditional autocorrelation differs in general. The latter property is a natural outcome of preserving the local dynamics of the MRS AR(1) process, and further numerical examples illustrate that the bias in the unconditional autocorrelation is small. Taken together, the paper provides the first, easy-to-implement discretization method for the popular Markov regime switching AR(1) process.
1 Deaton's method is made widely known by the textbook of Adda and Cooper (2003). More recent works on this topic include Flodén (2008), Kopecky and Suen (2010), Galindev and Lkhagvasuren (2010), Gospodinov and Lkhagvasuren (2014), Tanaka and Toda (2013, 2015), and Farmer and Toda (2017).
2 Bai and Zhang (2010) is one of a few quantitative works modeling a shock process with a fully specified AR(1) Markov regime switching process. However, they discretize the process in an ad hoc way. Storesletten, Telmer, and Yaron (2004) is another example of incorporating a Markov regime switching process into quantitative work, yet they only consider regime switching in the variance of the innovation, and thus the discretization difficulty is non-essential.
The paper proceeds as follows. Section 2 introduces the basic setup. Section 3.1 provides the closed form expressions for various moments of the MRS AR(1) process; section 3.2 shows how to modify the Rouwenhorst method for the purpose of our discretization; section 3.3 describes the discretization method; section 3.4 discusses the properties of the discretized Markov chain; section 3.5 is devoted to further discussion of the bias in the unconditional autocorrelation. Section 4 contains several numerical examples.
II Basic Setup
Consider the following simple model of an AR(1) process with regime switching,

Xt = (1 − ρ(St)) µ(St) + ρ(St) Xt−1 + σ(St) εt,   (1)

where the regime St follows a Markov chain with state space S ≡ {1, . . . , K}, and ρ(·), µ(·) and σ(·) are functions of St. We assume that εt is i.i.d. N(0, 1) and that the Markov chain {St} is homogeneous and ergodic.3 It follows that the range X of Xt, i.e., the state space, is the entire real line R. One crucial assumption for tractability, as stressed in Hamilton (1990, p. 43), is that St be independent of Xt−1 when conditioning on St−1. For the investigation of the discretization method in the next section, the assumption of εt having a normal distribution can be relaxed to: (i) εt is iid with zero mean and unit variance; (ii) εt has a strictly positive density over its support; and (iii) εt induces conditional independence between St and Xt−1. No result is affected by the specific distributional assumption on εt.
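For concreteness, the following minimal Python sketch simulates the process in (1) under the assumptions above. The helper name simulate_mrs_ar1 and the two-regime parameter values are purely illustrative and are not taken from the paper.

```python
# A minimal simulation sketch of the MRS AR(1) process in equation (1).
# The helper name and the parameter values below are illustrative only.
import numpy as np

def simulate_mrs_ar1(P, rho, mu, sigma, T, x0=0.0, s0=0, seed=0):
    """Simulate (X_t, S_t), t = 1, ..., T, given the regime transition matrix P."""
    rng = np.random.default_rng(seed)
    K = P.shape[0]
    S, X = np.empty(T, dtype=int), np.empty(T)
    s, x = s0, x0
    for t in range(T):
        s = rng.choice(K, p=P[s])                                  # draw S_t given S_{t-1}
        x = (1 - rho[s]) * mu[s] + rho[s] * x + sigma[s] * rng.standard_normal()
        S[t], X[t] = s, x
    return X, S

# Two illustrative regimes: persistent/low-volatility vs. less persistent/high-volatility
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
rho, mu, sigma = np.array([0.9, 0.5]), np.array([0.0, 1.0]), np.array([0.1, 0.3])
X, S = simulate_mrs_ar1(P, rho, mu, sigma, T=10_000)
```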
Note that for the degenerate case in which Sτ ≡ k ∈ S for all τ ∈ Z, Xt is a standard AR(1) process, hence a Markov chain with state space X. However, given that St follows a Markov chain, there exists no simple expression for the distribution of Xt conditional on Xt−1 alone. Nonetheless, it is possible to derive the joint transition kernel for (Xt, St), upon which the probabilistic properties of Xt can be investigated. To begin with, we use the succinct notation p(·) to denote the density of a continuous distribution and the probability mass of a discrete one. With this notation, first note that the autoregressive equation (1) implies the following conditional distribution of Xt given Xt−1 and St:

Xt | (Xt−1, St) ∼ N( (1 − ρ(St)) µ(St) + ρ(St) Xt−1, σ²(St) ).
3 In general, we call a Markov chain ergodic if it is irreducible, aperiodic and positive recurrent; cf. Meyn and Tweedie (2009, ch. 13) for terminology. We note in passing that when a chain is finite, irreducibility implies recurrence, hence positive recurrence.
Second, let pkℓ ≡ p(St = ℓ | St−1 = k), 1 ≤ k, ℓ ≤ K, denote the regime transition probabilities, and correspondingly P ≡ [pkℓ] the transition matrix. It is then straightforward to show that

p(Xt, St = ℓ | Xt−1, St−1 = k) = pkℓ · p(Xt | Xt−1, St = ℓ),   (2)

which can be easily verified to be a Markov transition kernel defined over the product state space X × S. It is worth noting that the derivation rests on two facts: (i) the distribution of Xt does not depend on St−1 when conditioning on Xt−1 and St, and (ii) St is independent of Xt−1 conditional on St−1.
By ergodicity, St has a unique invariant distribution π = (π1, . . . , πK) with πk > 0 for all k. Given that εt ∼ N(0, 1) and St is ergodic, the transition kernel defined in (2) guarantees that the joint process (Xt, St) is ergodic as well.4 This fact follows directly from the results of Yao and Attali (2000, thm. 1), which deal with general nonlinear MRS autoregressive processes. To be more specific, Yao and Attali (2000) identify three conditions for the ergodicity of (Xt, St): (i) a moment condition on εt, i.e., E|εt|^m < ∞ for some m > 0; (ii) a stability condition on ρ(·), i.e., Σk πk log |ρ(k)| < 0;5 and (iii) an irreducibility condition. The moment condition is clearly satisfied in our setup, and we shall assume that the stability condition holds throughout the paper. Note that the stability condition allows |ρ(ℓ)| > 1 for some ℓ, as long as the mean of log |ρ(k)| is negative. For the third condition, irreducibility of St and strict positivity of the density of εt guarantee φ-irreducibility of (Xt, St), where φ is the product measure over X × S (i.e., Lebesgue ⊗ counting). The ergodicity of the joint Markov chain {(Xt, St)} implies a unique invariant distribution ν(·, ·) over X × S.
In what follows, we shall assume that (Xt, St) starts from ν, and all moments related to Xt are taken under ν, denoted by Eν. Meanwhile, we also use Eπ whenever it is desirable to indicate that the expectation is taken for St under π. It is worth stressing that {Xt} alone is
4 Under the stated assumptions, St admits a unique invariant distribution, which is also the marginal distribution of ν on St. To be precise, the main results derived in the paper only require the existence of an invariant distribution of St that is strictly positive for all regimes. This condition in turn is equivalent to (positive) recurrence of St; see Timmermann (2000) and Yang (2000) for related works under similar conditions. Instead, working under the ergodic assumption allows us to be more precise about the probabilistic properties of Xt and its approximation. In addition, ergodic specifications of St prevail in econometric analyses of MRS processes, either as assumptions or as estimation results.
5 It is understood that if ρ(k) = 0 for some k, then log |ρ(k)| = −∞ and E log |ρ(St)| = −∞ as πk > 0. The sufficiency of this specific stability condition dates back to Brandt (1986), and Bougerol and Picard (1992) establish the converse for the case where St is iid.
not Markovian, even though {(Xt, St)} jointly constitutes a Markov chain.6
III Discretization
We now describe how to approximate the joint process {(Xt , St )} by a finite Markov chain.
Since the regimes are already discrete, the basic idea is to find a discrete state space Z(k) = {zi(k)} for each regime k, and then compute the associated transition probabilities both within and across regimes given Z = ∪k Z(k). The result of these two steps is a finite Markov chain {Zt} with state space Z and transition matrix Q. The target is not only to have (Z, Q) approximate well the stationary properties of Xt, but perhaps even more so to ensure that the conditional dynamics of Xt given St are captured by (Z, Q). Thus, it is important to account for the differences among the regime conditional distributions both in choosing Z(k) and in computing Q.
From both the practical and the theoretical perspective, it is convenient, and very often sufficient, to capture the main properties of the conditional distributions through the first and second moments (including the autocorrelations). In what follows, we first present closed form formulas for the first two moments of Xt, conditional on each regime St. Then we utilize the conditional moments to determine the state space Z and the transition matrix Q, through a slightly generalized Rouwenhorst method.
III.A Moment Formula
We first present closed form expressions for the regime conditional mean Eν[Xt|St] and variance var(Xt|St).7 Let Eν[Xt|S] and Eν[Xt²|S] denote the column vectors of Eν[Xt|St = k] and Eν[Xt²|St = k] for k = 1, . . . , K respectively; then we have:
Eν[Xt|S] = diag^{-1}(π) [ π diag(µ) diag(ι − ρ) (I − P diag(ρ))^{-1} ]′,

Eν[Xt²|S] = diag^{-1}(π) [ π diag(σ²) (I − P diag(ρ²))^{-1}
          + π diag(µ²) diag((ι − ρ)²) (I − P diag(ρ²))^{-1}
          + 2 π diag(µ) diag(ι − ρ) (I − P diag(ρ))^{-1} P diag(ρ) diag(µ) diag(ι − ρ) (I − P diag(ρ²))^{-1} ]′.
In the above expressions, we adopt the following notational conventions: for any vector a = [a1, . . . , aK], diag(a) denotes the diagonal matrix whose i'th diagonal element is ai, a² denotes [a1², . . . , aK²], and ι = [1, . . . , 1]. Accordingly, we have a closed form expression for the conditional variance var(Xt|St = k) = Eν[Xt²|St = k] − Eν[Xt|St = k]².
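As an illustration, the closed form expressions above can be evaluated directly with linear algebra. The sketch below (in Python/NumPy; the helper names are ours, not the paper's) computes π, the regime conditional means, second moments, and variances from (P, ρ, µ, σ).

```python
# A sketch evaluating the closed-form regime-conditional moments above.
# Helper names are ours; P, rho, mu, sigma follow the notation of the text.
import numpy as np

def invariant_distribution(P):
    """Invariant distribution pi of an ergodic transition matrix P, i.e. pi = pi P."""
    K = P.shape[0]
    A = np.vstack([P.T - np.eye(K), np.ones(K)])   # stationarity plus normalization
    b = np.append(np.zeros(K), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def conditional_moments(P, rho, mu, sigma):
    """Return E[X_t|S_t=k], E[X_t^2|S_t=k] and var(X_t|S_t=k) as length-K arrays."""
    K = P.shape[0]
    pi = invariant_distribution(P)
    I, D = np.eye(K), np.diag
    c = mu * (1 - rho)                              # regime intercepts mu(k)(1 - rho(k))
    # m_tilde[k] = pi_k * E[X_t | S_t = k]; v_tilde[k] = pi_k * E[X_t^2 | S_t = k]
    m_tilde = pi @ D(c) @ np.linalg.inv(I - P @ D(rho))
    v_tilde = (pi @ D(sigma**2) + pi @ D(c**2)
               + 2 * m_tilde @ P @ D(rho) @ D(c)) @ np.linalg.inv(I - P @ D(rho**2))
    m, v2 = m_tilde / pi, v_tilde / pi
    return m, v2, v2 - m**2
```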
6 Indeed, Francq and Zakoïan (2001) and Zhang and Stine (2001) prove that, up to the covariance structure alone and ignoring higher order moments, Xt has an ARMA(p, q) representation, where p, q ≥ K − 1 depend on the number of regimes K. In particular, whenever K ≥ 3, p, q ≥ 2, so that Xt is necessarily not first-order Markovian, as Xt−2 affects the first two moments of Xt.
7 See Liu (2015) for the relevant derivations.
In addition, in the benchmark method, we use the regime conditional autocorrelation to capture the local dynamic properties of Xt. Specifically, conditioning on two consecutive regimes (St+1, St), the conditional first order autocorrelation

ϕ(St+1, St) = cov(Xt+1, Xt | St+1, St) / √( var(Xt+1 | St+1, St) var(Xt | St+1, St) )   (3)
characterizes the persistence of Xt across the given regimes. To calculate the related conditional moments, note that all coefficients of the AR(1) equation (1) become constant once we condition on St+1; hence the randomness of Xt+1 comes only from Xt and εt+1. As a result, it is straightforward to derive

cov(Xt+1, Xt | St+1, St) = ρ(St+1) var(Xt | St),   var(Xt+1 | St+1, St) = ρ²(St+1) var(Xt | St) + σ²(St+1),   (4)

where we have used the fact that St+1 and Xt are independent conditional on St, which also implies var(Xt | St+1, St) = var(Xt | St). As a result, the conditional autocorrelation can be
written as

ϕ(St+1, St) = ρ(St+1) / √( ρ²(St+1) + σ²(St+1)/var(Xt | St) ).   (5)
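Given the conditional variances, (5) is immediate to evaluate. The following sketch (helper name ours) stores ϕ(St+1 = ℓ, St = k) in entry [ℓ, k]:

```python
# A sketch of the conditional autocorrelation in (5); phi[l, k] corresponds to
# phi(S_{t+1} = l, S_t = k). The function name is ours.
import numpy as np

def conditional_autocorrelation(rho, sigma, var_cond):
    K = len(rho)
    phi = np.empty((K, K))
    for k in range(K):                  # current regime S_t = k
        for l in range(K):              # next regime S_{t+1} = l
            phi[l, k] = rho[l] / np.sqrt(rho[l]**2 + sigma[l]**2 / var_cond[k])
    return phi
```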
III.B A Generalized Rouwenhorst Method
Consider two random variables W and Y with a prescribed correlation ϕ, which we wish to approximate by discrete random variables W∗ and Y∗. Let us first fix the number of discrete values of W∗ and of Y∗ to be the same N ≥ 2. Denote by µW and µY the means, and by σW and σY the standard deviations, of W and Y. Following the Rouwenhorst method, we choose the N values {w1, . . . , wN} of W∗ to be equally spaced between µW − σW√(N − 1) and µW + σW√(N − 1), and choose {y1, . . . , yN} in exactly the same way. Next, we choose the marginal distribution of W∗ to be binomial, i.e., Pr(W∗ = wi) = 2^{−(N−1)} (N−1)!/[(i−1)!(N−i)!], and the conditional distribution of Y∗ to be Pr(Y∗ = yj | W∗ = wi) = λij,
where λij is the (i, j)'th element of a matrix Λ. The key ingredient of the Rouwenhorst method is the recursive construction of Λ. Start from

Λ2 = [ (1 + ϕ)/2   (1 − ϕ)/2 ;  (1 − ϕ)/2   (1 + ϕ)/2 ],

and, for n ≥ 3, with p ≡ (1 + ϕ)/2, form the n × n matrix

Λ̂n = p [ Λn−1  0 ; 0′  0 ] + (1 − p) [ 0  Λn−1 ; 0  0′ ] + (1 − p) [ 0′  0 ; Λn−1  0 ] + p [ 0  0′ ; 0  Λn−1 ],

where 0 is a column vector of zeros; then divide all except the first and last rows of Λ̂n by 2, and the resulting matrix is Λn.
Rouwenhorst (1995) points out that Λ is a Markov transition matrix, thus each row indeed gives rise to a conditional distribution. Moreover, the binomial distribution with probabilities 2^{−(N−1)} (N−1)!/[(i−1)!(N−i)!] constitutes the unique ergodic distribution of Λ, therefore the marginal distribution of Y∗ is the same binomial, Pr(Y∗ = yi) = 2^{−(N−1)} (N−1)!/[(i−1)!(N−i)!]. It is then not difficult to demonstrate that the above constructions satisfy EW∗ = µW, EY∗ = µY, var W∗ = σW², var Y∗ = σY², and cov(W∗, Y∗) = ϕσWσY. It is worth stressing that these properties are independent of the number of states N chosen at the beginning.
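The recursion above is straightforward to implement. The following sketch (function name ours) uses p = q = (1 + ϕ)/2, the choice under which the implied first-order correlation of the resulting chain equals ϕ:

```python
# A sketch of the recursive construction of Lambda_N described above,
# with p = q = (1 + phi)/2. The function name is ours.
import numpy as np

def rouwenhorst_matrix(N, phi):
    p = (1.0 + phi) / 2.0
    Lam = np.array([[p, 1 - p],
                    [1 - p, p]])
    for n in range(3, N + 1):
        big = np.zeros((n, n))
        big[:-1, :-1] += p * Lam            # Lambda_{n-1} in the upper-left block
        big[:-1, 1:]  += (1 - p) * Lam      # upper-right block
        big[1:, :-1]  += (1 - p) * Lam      # lower-left block
        big[1:, 1:]   += p * Lam            # lower-right block
        big[1:-1, :]  /= 2.0                # divide all but the first and last rows by 2
        Lam = big
    return Lam
```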
Rouwenhorst's original paper does not contain formal proofs of these facts; see Kopecky and Suen (2010) for formal proofs in the setting of a Markov chain,9 which corresponds to the special case µW = µY and σW = σY of the current more general setting. All the proofs directly apply to our setting because (i) the state spaces of W∗ and Y∗ differ only by a scaling factor after removing the means and (ii) the marginal distributions are the same.
9 Especially the appendix of their working paper version.
III.C Discretizing the MRS Process
We are now ready to construct the discrete state space Z and the transition matrix Q. Let
N ≥ 2 be the fixed number of states chosen for all regimes, so that there are NK states in total for the approximating Markov chain. For each regime k, let Z(k) = {z1(k), . . . , zN(k)} be N points equally spaced from Eν(Xt|St = k) − √( var(Xt|St = k)(N − 1) ) to Eν(Xt|St = k) + √( var(Xt|St = k)(N − 1) ).
Let Qkℓ denote an N × N matrix that forms the (k, ℓ)'th square block of Q, with 1 ≤ k, ℓ ≤ K. Each Qkℓ equals pkℓ ΛN(ϕ(k, ℓ)), where pkℓ is the transition probability from regime k to ℓ, ΛN(ϕ(k, ℓ)) is the Rouwenhorst transition matrix constructed with correlation ϕ(k, ℓ), and ϕ(k, ℓ) denotes the conditional autocorrelation ϕ(St+1, St) with St = k and St+1 = ℓ. It is readily verified that the Q so constructed is indeed a Markov transition matrix, and its
unique invariant distribution η has a simple structure. Let

ξ = (ξ1, . . . , ξN),   ξi = 2^{−(N−1)} (N−1)!/[(i−1)!(N−i)!],   i = 1, . . . , N,
denote the vector of N binomial probabilities, and recall π is the invariant distribution of P .
Straightforward calculation then shows that the following row vector
η = π ⊗ ξ = (π1 , . . . , πK ) ⊗ (ξ1 , . . . , ξN )
satisfies η = ηQ, thus η is the unique invariant distribution of Q. We shall use Eη to denote
the expectation of Zt under η.
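Putting the pieces together, the following sketch assembles (Z, Q) and η, building on the helper functions from the earlier sketches (all names are ours):

```python
# A sketch assembling the discrete chain (Z, Q) and its invariant distribution eta,
# using conditional_moments, conditional_autocorrelation and rouwenhorst_matrix
# from the earlier sketches. Names are ours.
import numpy as np
from math import comb

def discretize_mrs_ar1(P, rho, mu, sigma, N):
    K = P.shape[0]
    m, v2, var_c = conditional_moments(P, rho, mu, sigma)
    phi = conditional_autocorrelation(rho, sigma, var_c)
    # Regime-specific grids: N equally spaced points centered at the conditional mean
    Z = np.array([np.linspace(m[k] - np.sqrt(var_c[k] * (N - 1)),
                              m[k] + np.sqrt(var_c[k] * (N - 1)), N) for k in range(K)])
    # Block (k, l) of Q is p_{kl} * Lambda_N(phi(S_{t+1} = l, S_t = k))
    Q = np.zeros((N * K, N * K))
    for k in range(K):
        for l in range(K):
            Q[k*N:(k+1)*N, l*N:(l+1)*N] = P[k, l] * rouwenhorst_matrix(N, phi[l, k])
    # Invariant distribution eta = pi (x) xi, with xi the binomial weights
    pi = invariant_distribution(P)
    xi = np.array([comb(N - 1, i) for i in range(N)]) / 2.0**(N - 1)
    eta = np.kron(pi, xi)
    return Z, Q, eta
```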
III.D Properties of the Discretization
By construction, the discrete Markov chain (Z, Q) exactly replicates the conditional mean
and variance of Xt on each regime, i.e., Eν (Xt |St ) = Eη (Zt |St ) and var(Xt |St ) = var(Zt |St ),
where conditional moments such as Eη (Zt |St = k) refer to Zt taking values in Z(k) under the
marginal distribution derived from ξ. It follows directly that the chain also replicates the unconditional mean, as

Eη Zt = Eπ Eη(Zt | St) = Eπ Eν(Xt | St) = Eν Xt.
A less straightforward fact is that the chain also replicates the unconditional variance of Xt. To show this, note that var Y = EY² − (EY)² for any random variable Y and Eν Xt = Eη Zt; therefore we only need to verify Eν Xt² = Eη Zt². The last equality can be established as follows:

Eν Xt² = Eπ Eν(Xt² | St) = Eπ( var(Xt | St) + [Eν(Xt | St)]² )
       = Eπ( var(Zt | St) + [Eη(Zt | St)]² ) = Eπ Eη(Zt² | St) = Eη Zt².
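A quick numerical check of these two replication results, continuing the earlier sketches with the illustrative two-regime parameters:

```python
# A sketch checking that the chain replicates the unconditional mean and variance
# (continues the earlier sketches; parameter values are illustrative).
import numpy as np

P = np.array([[0.95, 0.05], [0.10, 0.90]])
rho, mu, sigma = np.array([0.9, 0.5]), np.array([0.0, 1.0]), np.array([0.1, 0.3])
N = 9

Z, Q, eta = discretize_mrs_ar1(P, rho, mu, sigma, N)
pi = invariant_distribution(P)
m, v2, _ = conditional_moments(P, rho, mu, sigma)

z = Z.ravel()                           # states listed regime by regime, matching eta
EZ, EZ2 = eta @ z, eta @ z**2
EX, EX2 = pi @ m, pi @ v2
print(EZ - EX)                          # ~ 0: unconditional means coincide
print((EZ2 - EZ**2) - (EX2 - EX**2))    # ~ 0: unconditional variances coincide
```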
It is worth stressing that the unconditional variance var Xt does not equal the average of the conditional variances var(Xt|St) weighted by π, and consequently one cannot conclude var Zt = var Xt simply from the fact that var(Zt|St) = var(Xt|St).10
The case of unconditional autocorrelations is more involved. As with the unconditional variance, identical autocorrelations of Xt and Zt conditional on consecutive regimes (St+1, St) do not imply identical unconditional autocorrelations. In fact, we shall demonstrate in a moment that the unconditional autocorrelations ρX and ρZ are not equal to each other in general.
Since var(Xt) = var(Zt) and the autocorrelation equals the autocovariance divided by the variance, it suffices to consider the autocovariances of Xt and Zt. Moreover, since Eν Xt = Eη Zt, cov(Xt+1, Xt) = Eν Xt+1 Xt − [Eν Xt]² under the invariant distribution, and cov(Zt+1, Zt) = Eη Zt+1 Zt − [Eη Zt]² analogously, we only need to compare Eν Xt+1 Xt and Eη Zt+1 Zt. In turn, these two unconditional moments are related via the conditional moments.
To spell out the details, first note that

cov(Zt+1, Zt | St+1, St) = ϕ(St+1, St) √( var(Zt | St+1, St) var(Zt+1 | St+1, St) ),

and since var(Zt | St+1, St) = var(Zt | St) and var(Zt+1 | St+1, St) = var(Zt+1 | St+1), it follows that

cov(Zt+1, Zt | St+1, St) = ϕ(St+1, St) √( var(Zt | St) var(Zt+1 | St+1) ).
Employing the definition of ϕ(St+1, St) in (3)–(5) and the fact that var(Zt | St) = var(Xt | St), we can write the last expression as

cov(Zt+1, Zt | St+1, St) = cov(Xt+1, Xt | St+1, St) / √( var(Xt+1 | St+1, St) / var(Xt+1 | St+1) ).
Define χ(St+1, St) ≡ √( var(Xt+1 | St+1, St) / var(Xt+1 | St+1) ), so that cov(Xt+1, Xt | St+1, St) = χ(St+1, St) cov(Zt+1, Zt | St+1, St). As a result,

Eν(Xt+1 Xt | St+1, St) = χ(St+1, St) cov(Zt+1, Zt | St+1, St) + Eν(Xt+1 | St+1, St) Eν(Xt | St)
                       ≠ cov(Zt+1, Zt | St+1, St) + Eη(Zt+1 | St+1) Eη(Zt | St) = Eη(Zt+1 Zt | St+1, St)   (6)

in general. Recalling that Eη Zt+1 Zt = Eπ Eη(Zt+1 Zt | St+1, St), it is thus evident from (6) that
Eν Xt+1 Xt ≠ Eη Zt+1 Zt in general as well. To illustrate, consider a particular case with µ(·) ≡ 0, so that Eν Xt = Eη Zt = 0, hence Eν Xt+1 Xt = Eπ[ χ(St+1, St) Eη(Zt+1 Zt | St+1, St) ].
Observe that χ(St+1 , St ) is generically correlated with Eη (Zt+1 Zt |St+1 , St ), thus only under
rather special circumstances will Eν Xt+1 Xt = Eη Zt+1 Zt hold.
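To quantify the disparity, the exact unconditional autocorrelation of Xt can be computed from the conditional moments, and that of Zt from (Z, Q, η). The sketch below continues the earlier ones (function names are ours); the gap between the two numbers illustrates the bias discussed above.

```python
# A sketch comparing the exact unconditional first-order autocorrelations of X and Z,
# continuing the earlier sketches. Helper names are ours.
import numpy as np

def autocorr_X(P, rho, mu, sigma):
    pi = invariant_distribution(P)
    m, v2, _ = conditional_moments(P, rho, mu, sigma)
    c = mu * (1 - rho)
    EX, varX = pi @ m, pi @ v2 - (pi @ m)**2
    # E[X_{t+1} X_t] = sum_{k,l} pi_k p_{kl} [ c(l) m_k + rho(l) E(X_t^2 | S_t = k) ]
    EXX = sum(pi[k] * P[k, l] * (c[l] * m[k] + rho[l] * v2[k])
              for k in range(len(pi)) for l in range(len(pi)))
    return (EXX - EX**2) / varX

def autocorr_Z(Z, Q, eta):
    z = Z.ravel()
    EZ, varZ = eta @ z, eta @ z**2 - (eta @ z)**2
    EZZ = z @ (np.diag(eta) @ Q) @ z        # E_eta[Z_t Z_{t+1}]
    return (EZZ - EZ**2) / varZ

print(autocorr_X(P, rho, mu, sigma), autocorr_Z(Z, Q, eta))
```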
III.E Further Discussion
A careful check of the derivation of (6) reveals the reasons underlying the disparity between Eν Xt+1 Xt and Eη Zt+1 Zt. The first reason lies in the fact that χ(St+1, St) ≠ 1 in general; more precisely, the conditional variance of Xt+1,

var(Xt+1 | St+1, St) = ρ²(St+1) var(Xt | St) + σ²(St+1),

differs from var(Zt+1 | St+1, St) = var(Zt+1 | St+1), which in turn equals var(Xt+1 | St+1). The second reason is similar to the first, in that the conditional mean

Eν(Xt+1 | St+1, St) = (1 − ρ(St+1)) µ(St+1) + ρ(St+1) Eν(Xt | St)

differs from Eη(Zt+1 | St+1, St) = Eη(Zt+1 | St+1) = Eν(Xt+1 | St+1). Both facts point to the nature of the problem: the distribution of Xt+1 conditional on St+1 is affected by St (indeed, by St−j for all j ≥ 0); alternatively put, past regimes always contain useful information regarding the current observation. In contrast, the distribution of Zt+1 conditional on St+1 is fixed by construction, irrespective of any past regime St−j, j ≥ 0.
The latter property arises whenever the state space chosen for the discretization targets only the conditional distribution of Xt on the current regime St, so that the impact of past regimes disappears, while an essential feature of the MRS autoregressive process is the influence of the entire history of regimes on the current observation. To what extent such a deficiency jeopardizes the performance of the discretized Markov chain ultimately depends
on the specific setting where the discretization is required. Nonetheless, it seems justifiable to us to first target the local, i.e., regime-specific, properties of the MRS process in question, as the very advantage of an MRS model is to provide a simple yet flexible setup for capturing the heterogeneous local dynamics observed in time series data.
Notwithstanding this essential feature of the discretization procedure, we can still modify the construction of the transition matrix so that {Zt} perfectly replicates the unconditional mean, variance and autocorrelation, together with the conditional mean and variance of Xt on each regime. To this end, we only need to modify the transition matrix so that Qkℓ = pkℓ ΛN(ϕ̄(k, ℓ)) for 1 ≤ k, ℓ ≤ K, where

ϕ̄(St+1, St) = { cov(Xt, Xt+1 | St+1, St) + [Eν(Xt+1 | St+1, St) − Eν(Xt+1 | St+1)] Eν(Xt | St) } / √( var(Xt | St) var(Xt+1 | St+1) ),   (7)
with St = k, St+1 = ℓ. With Z unchanged and the binomial distribution ξ remaining the conditional distribution of Zt within each regime, it follows that both the conditional and unconditional mean and variance of Zt stay the same, hence equal to those of Xt. A derivation analogous to (6) shows that Eν Xt+1 Xt = Eη Zt+1 Zt, thus the unconditional autocorrelations of Xt and Zt are the same. However, the drawback of (7) is that ϕ̄(St+1, St) need not be less than unity in absolute value, which opens the possibility of negative entries in a transition matrix based on ϕ̄(St+1, St). As a result, matching the unconditional autocorrelation by such a method may not be feasible. This is an important caveat to keep in mind in any practical exercise.
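For completeness, a sketch of (7), continuing the earlier sketches (the helper name is ours); in practice one should check whether any |ϕ̄(k, ℓ)| exceeds one before using it in place of ϕ(k, ℓ).

```python
# A sketch of the modified correlation phi_bar in (7); phi_bar[l, k] corresponds to
# phi_bar(S_{t+1} = l, S_t = k). Continues the earlier sketches; the name is ours.
import numpy as np

def phi_bar(P, rho, mu, sigma):
    m, v2, var_c = conditional_moments(P, rho, mu, sigma)
    c = mu * (1 - rho)
    K = len(rho)
    pb = np.empty((K, K))
    for k in range(K):                  # S_t = k
        for l in range(K):              # S_{t+1} = l
            cov_kl = rho[l] * var_c[k]                    # cov(X_t, X_{t+1} | l, k)
            mean_gap = (c[l] + rho[l] * m[k]) - m[l]      # E(X_{t+1}|l,k) - E(X_{t+1}|l)
            pb[l, k] = (cov_kl + mean_gap * m[k]) / np.sqrt(var_c[k] * var_c[l])
    return pb                           # entries may exceed 1 in absolute value
```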
IV Numerical Examples
References
Adda, J., and R. Cooper (2003): Dynamic Economics: Quantitative Methods and Applications.
The MIT Press, Cambridge. [2]
Bai, Y., and J. Zhang (2010): “Solving the Feldstein-Horioka Puzzle with Financial Frictions,”
Econometrica, 78(2), 603–632. [2]
Bougerol, P., and N. Picard (1992): “Strict Stationarity of Generalized Autoregressive Pro-
cesses,” Annals of Probability, 20(4), 1714–1730. [4]
Brandt, A. (1986): “The Stochastic Equation Yn+1 = An Yn + Bn with Stationary Coefficients,”
Advances in Applied Probability, 18(1), 211–220. [4]
Deaton, A. (1991): “Saving and Liquidity Constraints,” Econometrica, 59(5), 1221–1248. [2]
Farmer, L. E., and A. A. Toda (2017): “Discretizing Nonlinear, Non-Gaussian Markov Processes
with Exact Conditional Moments,” Quantitative Economics, forthcoming. [2]
Flodén, M. (2008): “A Note on the Accuracy of Markov-Chain Approximations to Highly Persistent
AR(1) Processes,” Economics Letters, 99(3), 516–520. [2]
Francq, C., and J.-M. Zakoïan (2001): “Stationarity of Multivariate Markov-Switching ARMA
Models,” Journal of Econometrics, 102(2), 339–364. [5]
[Figure: samples of (φX, φZ); samples of (χmin, χmax); histogram of the bias φZ/φX; and the bias conditional on φX (mean, 5%- and 95%-quantiles).]
Galindev, R., and D. Lkhagvasuren (2010): “Discretization of Highly Persistent Correlated
AR(1) Shocks,” Journal of Economic Dynamics and Control, 34(7), 1260–1276. [2]
Gospodinov, N., and D. Lkhagvasuren (2014): “A Moment-Matching Method For Approximat-
ing Vector Autoregressive Processes By Finite-State Markov Chains,” Journal of Applied Econo-
metrics, 29(5), 843–859. [2]
Hamilton, J. D. (1989): “A New Approach to the Economic Analysis of Nonstationary Time Series
and the Business Cycle,” Econometrica, 57(2), 357–384. [2]
(1990): “Analysis of Time Series Subject to Changes in Regime,” Journal of Econometrics,
45(1-2), 39–70. [3]
(2015): “Macroeconomic Regimes and Regime Shifts,” in Handbook of Macroeconomics,
vol. 2. Elsevier. [2]
Kopecky, K. A., and R. M. H. Suen (2010): “Finite State Markov-Chain Approximations to
Highly Persistent Processes,” Review of Economic Dynamics, 13(3), 701–714. [2, 6, 7]
Liu, Y. (2015): “Estimation of Time Series Model with Markov Regime Switching: A GMM Ap-
proach,” Working paper, Wuhan University. [5]
Meyn, S., and R. L. Tweedie (2009): Markov Chains and Stochastic Stability. Cambridge Univer-
sity Press, New York, 2 edn. [3]
Rouwenhorst, K. G. (1995): “Asset Pricing Implications of Equilibrium Business Cycle Models,”
in Frontiers of Business Cycle Research, ed. by T. F. Cooley, chap. 10, pp. 294–330. Princeton
University Press, Princeton. [2, 6, 7]
Storesletten, K., C. I. Telmer, and A. Yaron (2004): “Consumption and Risk Sharing over
the Life Cycle,” Journal of Monetary Economics, 51(3), 609–633. [2]
Tanaka, K., and A. A. Toda (2013): “Discrete Approximations of Continuous Distributions by
Maximum Entropy,” Economics Letters, 118(3), 445–450. [2]
(2015): “Discretizing Distributions with Exact Moments: Error Estimate and Convergence
Analysis,” SIAM Journal on Numerical Analysis, 53(5), 2158–2177. [2]
Tauchen, G. (1986): “Finite State Markov-chain Approximations to Univariate and Vector Autore-
gressions,” Economics Letters, 20(2), 177–181. [2, 6]
Tauchen, G., and R. Hussey (1991): “Quadrature-Based Methods for Obtaining Approximate
Solutions to Nonlinear Asset Pricing Models,” Econometrica, 59(2), 371–396. [2]
Timmermann, A. (2000): “Moments of Markov Switching Models,” Journal of Econometrics, 96(1),
75–111. [4]
Yang, M. (2000): “Some Properties of Vector Autoregressive Processes with Markov-Switching Co-
efficients,” Econometric Theory, 16(1), 23–43. [4]
Yao, J., and J.-G. Attali (2000): “On Stability of Nonlinear AR Processes with Markov Switching,”
Advances in Applied Probability, 32(2), 394–407. [4]
Zhang, J., and R. A. Stine (2001): “Autocovariance Structure of Markov Regime Switching Models
and Model Selection,” Journal of Time Series Analysis, 22(1), 107–124. [5]