
Advanced Econometrics III

Lecture 2 ARMA Processes

Katerina Petrova
Lag operators

Definition
The lag operator is defined by $L y_t = y_{t-1}$, so that for any $K \in \mathbb{N}$,
$$L^K y_t = L^{K-1} y_{t-1} = L^{K-2} y_{t-2} = \dots = y_{t-K}.$$

Definition
The difference operator is defined by $\Delta y_t = (1 - L)\, y_t = y_t - y_{t-1}$, and
$$\Delta^k y_t = (1 - L)^k y_t, \qquad \Delta_k y_t = (1 - L^k)\, y_t = y_t - y_{t-k}.$$
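As a quick numerical illustration (a minimal NumPy sketch, not part of the original slides), the iterated difference $\Delta^k$ and the $k$-period difference $\Delta_k$ can be computed directly on an array:

```python
import numpy as np

y = np.array([1.0, 3.0, 6.0, 10.0, 15.0])

dy  = np.diff(y)          # Delta y_t = y_t - y_{t-1}
d2y = np.diff(y, n=2)     # Delta^2 y_t = (1 - L)^2 y_t, the twice-iterated difference
d_2 = y[2:] - y[:-2]      # Delta_2 y_t = (1 - L^2) y_t = y_t - y_{t-2}
```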
Lag polynomials

Definition
We define a lag polynomial of order $p$ as
$$\alpha(L) = \alpha_0 + \alpha_1 L + \dots + \alpha_p L^p.$$

When we apply $\alpha(L)$ to a time series $y_t$, we have
$$\alpha(L)\, y_t = \alpha_0 y_t + \alpha_1 L y_t + \dots + \alpha_p L^p y_t = \alpha_0 y_t + \alpha_1 y_{t-1} + \dots + \alpha_p y_{t-p}.$$
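A direct translation of this definition into code (a hypothetical helper, not from the slides) applies $\alpha(L)$ to a sample path, dropping the first $p$ observations for which the required lags are unavailable:

```python
import numpy as np

def lag_poly(alpha, y):
    """Apply alpha(L) = alpha[0] + alpha[1] L + ... + alpha[p] L^p to a series y.
    The first p observations are dropped, since y_{t-p} is unavailable there."""
    alpha = np.asarray(alpha, dtype=float)
    y = np.asarray(y, dtype=float)
    p = len(alpha) - 1
    out = np.zeros(len(y) - p)
    for k, a in enumerate(alpha):
        out += a * y[p - k : len(y) - k]   # a_k * y_{t-k}
    return out

# alpha(L) = 1 - L applied to y recovers the first difference
print(lag_poly([1.0, -1.0], [1.0, 3.0, 6.0, 10.0]))   # [2. 3. 4.]
```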
Autoregressive Moving Average (ARMA) Processes

Definition
An Autoregressive Moving Average model of order $(p, q)$ (denoted by $X_t \sim ARMA(p, q)$) is defined recursively as
$$X_t = \mu + \sum_{i=1}^{p} \phi_i X_{t-i} + \sum_{j=0}^{q} \theta_j \varepsilon_{t-j} = \mu + \phi_1 X_{t-1} + \dots + \phi_p X_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q},$$
where $\varepsilon_t \sim WN(0, \sigma^2)$ and $\theta_0 = 1$; or equivalently (and more compactly)
$$\phi(L)\, X_t = \mu + \theta(L)\, \varepsilon_t,$$
where $\phi(L) = 1 - \phi_1 L - \dots - \phi_p L^p$ and $\theta(L) = 1 + \theta_1 L + \dots + \theta_q L^q$ are the autoregressive and moving average lag polynomials of order $p$ and $q$ respectively.
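The recursive definition maps directly into a simulator. Below is a minimal sketch (function name, Gaussian draws for the white noise, and the burn-in default are my own choices, not from the slides):

```python
import numpy as np

def simulate_arma(mu, phi, theta, sigma, T, burn=500, seed=0):
    """Simulate X_t = mu + sum_i phi_i X_{t-i} + eps_t + sum_j theta_j eps_{t-j},
    with Gaussian white noise eps_t ~ N(0, sigma^2); a burn-in washes out
    the zero initial conditions."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    eps = rng.normal(0.0, sigma, T + burn)
    X = np.zeros(T + burn)
    for t in range(max(p, q), T + burn):
        ar = sum(phi[i] * X[t - 1 - i] for i in range(p))
        ma = eps[t] + sum(theta[j] * eps[t - 1 - j] for j in range(q))
        X[t] = mu + ar + ma
    return X[burn:]

# e.g. an ARMA(1,1) with phi_1 = 0.7, theta_1 = 0.4
x = simulate_arma(mu=0.0, phi=[0.7], theta=[0.4], sigma=1.0, T=1_000)
```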
Moving Average (MA) Processes

Definition
A Moving Average model of order $q$ (denoted by $X_t \sim MA(q)$) is just a special case of $ARMA(0, q)$:
$$X_t = \mu + \sum_{j=0}^{q} \theta_j \varepsilon_{t-j} = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q},$$
where $\varepsilon_t \sim WN(0, \sigma^2)$; or equivalently (and more compactly)
$$X_t = \mu + \theta(L)\, \varepsilon_t,$$
with $\theta(L) = 1 + \theta_1 L + \dots + \theta_q L^q$ the moving average lag polynomial of order $q$.
Autoregressive (AR) Processes

Definition
An Autoregressive model of order $p$ (denoted by $X_t \sim AR(p)$) is another special case, $ARMA(p, 0)$:
$$X_t = \mu + \sum_{i=1}^{p} \phi_i X_{t-i} + \varepsilon_t = \mu + \phi_1 X_{t-1} + \dots + \phi_p X_{t-p} + \varepsilon_t,$$
where $\varepsilon_t \sim WN(0, \sigma^2)$; or equivalently (and more compactly)
$$\phi(L)\, X_t = \mu + \varepsilon_t,$$
with $\phi(L) = 1 - \phi_1 L - \dots - \phi_p L^p$ the autoregressive lag polynomial of order $p$.
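Both special cases fall out of the simulate_arma sketch given after the ARMA definition above (a hypothetical helper, not from the slides) by leaving one coefficient list empty:

```python
# MA(2), i.e. ARMA(0, 2): no autoregressive terms
x_ma = simulate_arma(mu=0.0, phi=[], theta=[0.5, 0.2], sigma=1.0, T=1_000)

# AR(1), i.e. ARMA(1, 0): no moving average terms beyond eps_t itself
x_ar = simulate_arma(mu=0.0, phi=[0.8], theta=[], sigma=1.0, T=1_000)
```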
Example (MA(1))
Let $X_t \sim MA(1)$: $X_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1}$. Then
$$E[X_t] = E[\mu + \varepsilon_t + \theta \varepsilon_{t-1}] = \mu$$
$$\gamma_{0t} = E[X_t - E(X_t)]^2 = E[\varepsilon_t + \theta \varepsilon_{t-1}]^2 = (1 + \theta^2)\, \sigma^2$$
$$\gamma_{1t} = E[X_t - E(X_t)][X_{t-1} - E(X_{t-1})] = E[\varepsilon_t + \theta \varepsilon_{t-1}][\varepsilon_{t-1} + \theta \varepsilon_{t-2}] = \theta \sigma^2$$
and for $j > 1$,
$$\gamma_{jt} = E[X_t - E(X_t)][X_{t-j} - E(X_{t-j})] = E[\varepsilon_t + \theta \varepsilon_{t-1}][\varepsilon_{t-j} + \theta \varepsilon_{t-j-1}] = 0.$$

Thus an MA(1) process is always covariance stationary, regardless of the value of $\theta$.
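These three moments are easy to verify by simulation. A minimal sketch (with the arbitrary choices $\theta = 0.5$, $\sigma = 1$, $\mu = 0$, none of which come from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, T = 0.5, 1.0, 200_000
eps = rng.normal(0.0, sigma, T + 1)
x = eps[1:] + theta * eps[:-1]          # MA(1) with mu = 0

xc = x - x.mean()
gamma0 = np.mean(xc * xc)               # should be close to (1 + theta^2) sigma^2 = 1.25
gamma1 = np.mean(xc[1:] * xc[:-1])      # should be close to theta sigma^2 = 0.50
gamma2 = np.mean(xc[2:] * xc[:-2])      # should be close to 0
print(gamma0, gamma1, gamma2)
```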
Example (MA(q))
Let $X_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}$. Then
$$E[X_t] = E[\mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}] = \mu$$
$$\gamma_{0t} = E[X_t - E(X_t)]^2 = E[\varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}]^2 = \sigma^2 \sum_{k=0}^{q} \theta_k^2$$
since the $\varepsilon_t$'s are uncorrelated. For $j = 1, \dots, q$,
$$\gamma_{jt} = E[X_t - E(X_t)][X_{t-j} - E(X_{t-j})]$$
$$= E[\varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}][\varepsilon_{t-j} + \theta_1 \varepsilon_{t-j-1} + \dots + \theta_q \varepsilon_{t-j-q}]$$
$$= E\big[\theta_j \theta_0 \varepsilon_{t-j}^2 + \theta_{j+1} \theta_1 \varepsilon_{t-j-1}^2 + \theta_{j+2} \theta_2 \varepsilon_{t-j-2}^2 + \dots + \theta_q \theta_{q-j} \varepsilon_{t-q}^2\big] = \sigma^2 \sum_{k=0}^{q-j} \theta_k \theta_{k+j},$$
where $\theta_0 = 1$. For $j > q$, since the $\varepsilon_t$'s are uncorrelated, $\gamma_j = 0$.

So, for any (finite) values of $\theta_1, \theta_2, \dots, \theta_q$, the process is stationary.
As an exercise, compute $\gamma_{jt}$ for $j = -q, \dots, -1$.
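The closing formula lends itself to a one-line implementation. Here is a hypothetical helper (not from the slides) computing $\gamma_j = \sigma^2 \sum_{k=0}^{q-|j|} \theta_k \theta_{k+|j|}$ with $\theta_0 = 1$:

```python
import numpy as np

def ma_autocov(theta, sigma2, j):
    """gamma_j = sigma^2 * sum_{k=0}^{q-|j|} theta_k theta_{k+|j|} for an MA(q),
    with theta_0 = 1 prepended; gamma_j = 0 for |j| > q."""
    th = np.r_[1.0, np.asarray(theta, dtype=float)]
    j = abs(j)                                # gamma_{-j} = gamma_j
    if j >= len(th):
        return 0.0
    return sigma2 * float(np.sum(th[: len(th) - j] * th[j:]))

# MA(1) with theta = 0.5, sigma^2 = 1: gamma_0 = 1.25, gamma_1 = 0.5, gamma_2 = 0
print(ma_autocov([0.5], 1.0, 0), ma_autocov([0.5], 1.0, 1), ma_autocov([0.5], 1.0, 2))
```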
Infinite MA Processes

Definition
An MA($\infty$) process $X_t$ is defined as the ($L^2$ or a.s.) limit as $n \to \infty$ of $X_{n,t} = \mu + \sum_{j=0}^{n} \theta_j \varepsilon_{t-j}$ with $\varepsilon_t \sim WN(0, \sigma^2)$:
$$X_t = \mu + \sum_{j=0}^{\infty} \theta_j \varepsilon_{t-j},$$
whenever the infinite series converges (in $L^2$ or a.s.).

Definition
$X_t$ is a linear process if it can be written as
$$X_t = \mu + \sum_{j=0}^{\infty} c_j \varepsilon_{t-j}, \qquad \varepsilon_t \sim WN(0, \sigma^2). \tag{1}$$

A linear process is just an MA($\infty$) process.


Linear Processes

Theorem (Wold Representation Theorem)
Any stochastic covariance stationary process $X_t$ admits a linear representation with square summable coefficients: $\sum_{j=0}^{\infty} c_j^2 < \infty$.

It is therefore crucial to study the properties of MA($\infty$) processes.
First, in order to write the infinite sum down, we should establish under what conditions it exists (i.e. it is finite).
We will also prove that as long as the coefficients $c_j$ are square summable, i.e. $\sum_{j=0}^{\infty} c_j^2 < \infty$, the process is covariance stationary.
We will also prove next time the Wold representation theorem for ARMA processes.

First, we show that given $\sum_{j=0}^{\infty} c_j^2 < \infty$, the MA($\infty$) representation generates an $L^2$-convergent random variable (note that when $\sum_{j=0}^{\infty} |c_j| < \infty$, we can show a.s. convergence).
Proof.
Let $X_{n,t} = \sum_{j=0}^{n} c_j \varepsilon_{t-j}$. For each $n$, we have shown $X_{n,t} \in L^2$, and we wish to show that $X_{n,t}$ converges in $L^2$.
By completeness of $(L^2, \|\cdot\|_{L^2})$, it suffices to show that $X_{n,t}$ satisfies the Cauchy criterion for $L^2$ convergence: for any $\varepsilon > 0$ $\exists N \in \mathbb{N}$, such that $\forall M > N$,
$$\|X_{M,t} - X_{N,t}\|_{L^2}^2 < \varepsilon.$$

Proof.
Let $\varepsilon > 0$ be given. We have that
$$\|X_{M,t} - X_{N,t}\|_{L^2}^2 = E\Big[\sum_{j=0}^{M} c_j \varepsilon_{t-j} - \sum_{j=0}^{N} c_j \varepsilon_{t-j}\Big]^2 = \sigma^2 \sum_{j=N+1}^{M} c_j^2 = \sigma^2 \Big(\sum_{j=0}^{M} c_j^2 - \sum_{j=0}^{N} c_j^2\Big).$$

Since the sequence $\big\{ C_N := \sum_{j=0}^{N} c_j^2 : N \in \mathbb{N} \big\}$ converges, it is Cauchy in $\mathbb{R}$, so that $\sum_{j=0}^{M} c_j^2 - \sum_{j=0}^{N} c_j^2$ can be made arbitrarily small for all $M > N$ and a suitable choice of $N$.
We conclude that $\exists N \in \mathbb{N}$ s.t. $\forall M > N$, $\|X_{M,t} - X_{N,t}\|_{L^2}^2$ can be made smaller than $\varepsilon$. Therefore the MA($\infty$) process $X_{n,t}$ converges in mean square (by completeness of $L^2$) as $n \to \infty$.
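As a numerical sanity check on the Cauchy step (a sketch with the arbitrary square summable choice $c_j = 1/(j+1)$, not from the slides), the increments $\sigma^2 \sum_{j=N+1}^{M} c_j^2$ shrink as $N$ grows:

```python
import numpy as np

c = 1.0 / np.arange(1.0, 100_002.0)       # c_j = 1/(j+1), j = 0, ..., 100000
csq = np.cumsum(c ** 2)                   # csq[N] = sum_{j=0}^{N} c_j^2
sigma2 = 1.0
for N in (10, 100, 1_000, 10_000):
    M = 10 * N
    print(N, sigma2 * (csq[M] - csq[N]))  # sum_{j=N+1}^{M} c_j^2, shrinking in N
```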
Proof.
It remains to show that the limit $X_t$ is in $L^2$ and to check that it is covariance stationary.
To show that square summability of the coefficients implies covariance stationarity of the MA($\infty$) process (and that $X_t$ is in $L^2$), we can compute the mean and covariances from the MA($q$) process, letting $q \to \infty$:
$$E[X_t] = \mu, \qquad \gamma_{jt} = \sigma^2 \sum_{k=0}^{\infty} c_k c_{k+j},$$
which doesn't depend on $t$. When would the covariances be finite?

By the Cauchy-Schwarz inequality,
$$|\gamma_j| \le \gamma_0 = \sigma^2 \sum_{j=0}^{\infty} c_j^2 < \infty$$
since $\sum_{j=0}^{\infty} c_j^2 < \infty$.
Square and Absolute Summability

Consider the linear process $X_t = \sum_{j=0}^{\infty} c_j \varepsilon_{t-j}$, with $\varepsilon_t \sim WN(0, \sigma^2)$.

Definition
For any sequence $(c_j)_{j \ge 0}$, we say that $(c_j)_{j \ge 0}$ are absolutely summable if $\sum_{j=0}^{\infty} |c_j| < \infty$.

Definition
For any sequence $(c_j)_{j \ge 0}$, we say that $(c_j)_{j \ge 0}$ are square summable if $\sum_{j=0}^{\infty} c_j^2 < \infty$.

We will often require absolute summability of the coefficients.


Square and Absolute Summability

Theorem
For any sequence $(c_j)_{j \ge 0}$, absolute summability $\sum_{j=0}^{\infty} |c_j| < \infty$ implies square summability $\sum_{j=0}^{\infty} c_j^2 < \infty$, but the converse does not hold.

Proof.
First we show that $\sum_{j=0}^{\infty} |c_j| < \infty$ implies that $|c_j| \to 0$ as $j \to \infty$.
To see this, if $|c_j| \not\to 0$, there exists $\varepsilon > 0$ such that $|c_j| \ge \varepsilon$ for infinitely many $j$, say for all $j \in J$, where $J$ is an infinite set: $|J| = \infty$. Then
$$\sum_{j=0}^{\infty} |c_j| \ge \sum_{j \in J} |c_j| \ge \varepsilon \sum_{j \in J} 1 = \varepsilon |J| = \infty.$$
Square and Absolute Summability

Proof.
Now $\sum_{j=0}^{\infty} |c_j| < \infty \implies |c_j| \to 0$ as $j \to \infty$ $\implies |c_j| < 1$ for all but finitely many $j$ $\implies c_j^2 < |c_j|$ for all but finitely many $j$.
Denote by $J_F = \{ j : c_j^2 \ge |c_j| \}$ and by $J_F^c$ its complement. Note that $|c_j| \to 0$ implies that $J_F$ is a finite set and $J_F^c$ is infinite. Then
$$\sum_{j=0}^{\infty} c_j^2 = \sum_{j \in J_F} c_j^2 + \sum_{j \in J_F^c} c_j^2 \le \sum_{j \in J_F} c_j^2 + \sum_{j \in J_F^c} |c_j| \le \sum_{j \in J_F} c_j^2 + \sum_{j=0}^{\infty} |c_j| < \infty,$$
since $\sum_{j=0}^{\infty} |c_j| < \infty$ and $\sum_{j \in J_F} c_j^2$ is a sum of finitely many terms ($|J_F|$ many).
Square and Absolute Summability

Proof.
We show the converse does not hold with a counterexample. Let $c_j = j^{-1}$. Then $\sum_{j=1}^{\infty} c_j^2 = \pi^2/6$, whereas
$$\sum_{j=1}^{N} |c_j| = \sum_{j=1}^{N} \frac{1}{j} \ge \int_1^N \frac{1}{t}\, dt = \log N \to \infty \quad (N \to \infty),$$
so $\sum_{j=1}^{\infty} |c_j| = \infty$.
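The counterexample is easy to see numerically (a minimal sketch, not from the slides): the partial sums of $c_j^2$ settle near $\pi^2/6 \approx 1.6449$, while those of $|c_j|$ keep growing like $\log N$:

```python
import numpy as np

j = np.arange(1.0, 1_000_001.0)   # j = 1, ..., 10^6
print(np.sum(1.0 / j**2))         # ~ 1.644933, approaching pi^2 / 6
print(np.sum(1.0 / j))            # ~ 14.39, still growing like log N
```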
Example (AR(1))
Let $X_t$ be an AR(1) process: $X_t = c + \rho X_{t-1} + \varepsilon_t$. Substituting recursively,
$$X_t = c + \varepsilon_t + \rho (c + \varepsilon_{t-1}) + \rho^2 X_{t-2} = c + \varepsilon_t + \rho (c + \varepsilon_{t-1}) + \rho^2 (c + \varepsilon_{t-2}) + \rho^3 X_{t-3} = \dots = c \sum_{j=0}^{\infty} \rho^j + \sum_{j=0}^{\infty} \rho^j \varepsilon_{t-j}, \tag{2}$$
which is an MA($\infty$) process with $\theta_j = \rho^j$. We know that the condition under which an MA($\infty$) is covariance stationary (given an appropriate initial condition $X_0$) is square summability:
$$\sum_{j=0}^{\infty} \theta_j^2 = \sum_{j=0}^{\infty} \rho^{2j} < \infty \iff |\rho| < 1, \quad \text{and} \quad c \sum_{j=0}^{\infty} \rho^j = c/(1 - \rho).$$
Then
$$E[X_t] = \frac{c}{1-\rho} + E[\varepsilon_t] + \rho E[\varepsilon_{t-1}] + \rho^2 E[\varepsilon_{t-2}] + \dots = \frac{c}{1-\rho}$$
$$\gamma_0 = E[X_t - E(X_t)]^2 = E[\varepsilon_t + \rho \varepsilon_{t-1} + \rho^2 \varepsilon_{t-2} + \dots]^2 = \sigma^2 (1 + \rho^2 + \rho^4 + \rho^6 + \dots) = \sigma^2/(1 - \rho^2).$$
Example (AR(1))
For $j \ge 1$,
$$\gamma_j = E[X_t - E(X_t)][X_{t-j} - E(X_{t-j})]$$
$$= E[\varepsilon_t + \rho \varepsilon_{t-1} + \rho^2 \varepsilon_{t-2} + \dots + \rho^j \varepsilon_{t-j} + \rho^{j+1} \varepsilon_{t-j-1} + \rho^{j+2} \varepsilon_{t-j-2} + \dots][\varepsilon_{t-j} + \rho \varepsilon_{t-j-1} + \rho^2 \varepsilon_{t-j-2} + \dots]$$
$$= \sigma^2 (\rho^j + \rho^{j+2} + \rho^{j+4} + \dots) = \sigma^2 \rho^j/(1 - \rho^2).$$

The autocorrelation is
$$r_j = \gamma_j / \gamma_0 = \rho^j.$$
What would the autocorrelation look like as a function of the lag $j$?
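One way to answer that question is to overlay the sample autocorrelations of a simulated path on the theoretical $\rho^j$. A minimal sketch (with the arbitrary choices $\rho = 0.8$, $c = 0$, $\sigma = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, T, burn = 0.8, 1.0, 100_000, 1_000
eps = rng.normal(0.0, sigma, T + burn)
x = np.zeros(T + burn)
for t in range(1, T + burn):
    x[t] = rho * x[t - 1] + eps[t]      # AR(1) with c = 0
x = x[burn:]

xc = x - x.mean()
gamma0 = np.mean(xc * xc)
for j in range(1, 6):
    r_j = np.mean(xc[j:] * xc[:-j]) / gamma0
    print(j, round(r_j, 3), round(rho**j, 3))   # sample ACF vs. rho^j: geometric decay
```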
AR(1) with lag operator

Example (AR(1))
We can write the AR(1) process as
$$\varepsilon_t = (1 - \rho L)\, X_t, \qquad |\rho| < 1,$$
so that
$$X_t = (1 - \rho L)^{-1} \varepsilon_t.$$
Recall that $(1 - r)^{-1}$, as long as $|r| < 1$, can be written as the geometric series $1 + r + r^2 + \dots = \sum_{j=0}^{\infty} r^j$. Therefore
$$X_t = (1 - \rho L)^{-1} \varepsilon_t = \sum_{j=0}^{\infty} \rho^j L^j \varepsilon_t = \sum_{j=0}^{\infty} \rho^j \varepsilon_{t-j},$$
so that $X_t$ is an MA($\infty$) process with coefficients $c_j = \rho^j$.
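The inversion can be checked numerically: truncating the MA($\infty$) representation $\sum_{j \le J} \rho^j \varepsilon_{t-j}$ reproduces the recursive AR(1) path up to an error of order $\rho^J$. A sketch (truncation lag $J$ and all parameter values are my own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, T, J = 0.8, 500, 60
eps = rng.normal(size=T + J)

# recursive AR(1) path started at zero
x_rec = np.zeros(T + J)
for t in range(1, T + J):
    x_rec[t] = rho * x_rec[t - 1] + eps[t]

# truncated MA(infinity): sum_{j=0}^{J} rho^j eps_{t-j}
coef = rho ** np.arange(J + 1)
x_ma = np.array([coef @ eps[t - J : t + 1][::-1] for t in range(J, T + J)])

print(np.max(np.abs(x_rec[J:] - x_ma)))   # tiny, of order rho^J
```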
