
14.384 Time Series Analysis, Fall 2007
Professor Anna Mikusheva
Paul Schrimpf, scribe
September 27, 2007
revised October 5, 2009

Lecture 11

VARs

Notation and some Linear Algebra
Let
$$y_t = \sum_{j=1}^{p} a_j y_{t-j} + e_t \qquad (1)$$
where $y_t$ and $e_t$ are $k \times 1$ and $a_j$ is $k \times k$; $e_t$ is white noise with $E e_t e_t' = \Sigma$ and $E e_t e_s' = 0$ for $t \neq s$.




Lemma 1. $y_t$ is stationary if $\det\left(I_k - \sum_{j=1}^{p} a_j z^j\right) \neq 0$ for all $|z| \leq 1$, i.e. all roots of $\det\left(I_k - \sum_{j=1}^{p} a_j z^j\right)$ are outside the unit circle.
Definition 2. The companion form of (1) is
$$Y_t = A Y_{t-1} + E_t,$$
where
$$Y_t = \begin{bmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{bmatrix}, \quad
A = \begin{bmatrix} a_1 & a_2 & \dots & a_p \\ I & 0 & \dots & 0 \\ 0 & I & & 0 \\ \vdots & & \ddots & \vdots \end{bmatrix}, \quad
E_t = \begin{bmatrix} e_t \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$
so $Y_t$ and $E_t$ are $kp \times 1$ and $A$ is $kp \times kp$.


So, any VAR(p) can be written as a multi-dimensional VAR(1). From the companion form one can note that
$$\Sigma_Y = A \Sigma_Y A' + \Sigma_E.$$
This may help to calculate the variance-covariance structure of a VAR. In particular, we may use the following formula from linear algebra:
$$\mathrm{vec}(ABC) = (C' \otimes A)\,\mathrm{vec}(B),$$
where $\otimes$ stands for the Kronecker (tensor) product and $\mathrm{vec}(A)$ transforms a matrix into a vector by stacking its columns: if
$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$, then $\mathrm{vec}(A) = [a_{11}, a_{21}, a_{31}, a_{12}, a_{22}, a_{32}]'$. Applying this,
$$\mathrm{vec}(\Sigma_Y) = \mathrm{vec}(A \Sigma_Y A' + \Sigma_E) = (A \otimes A)\,\mathrm{vec}(\Sigma_Y) + \mathrm{vec}(\Sigma_E),$$
so
$$\mathrm{vec}(\Sigma_Y) = (I - A \otimes A)^{-1}\,\mathrm{vec}(\Sigma_E).$$

Cite as: Anna Mikusheva, course materials for 14.384 Time Series Analysis, Fall 2007. MIT OpenCourseWare (http://ocw.mit.edu),
Massachusetts Institute of Technology. Downloaded on [DD Month YYYY].

Estimation

Lemma 3. MLE (with a normal error assumption) = OLS equation by equation, with
$$\hat{\Sigma} = \frac{1}{T} \sum_t \hat{e}_t \hat{e}_t'.$$
Intuition: all variables are included in all equations, so there is nothing gained by doing SUR. This also implies that OLS equation by equation is asymptotically efficient. The usual statements of consistency and asymptotic normality hold, as well as the OLS formulas for standard errors.
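Lemma 3 is easy to see in code: stacking the $k$ separate OLS regressions is one multivariate least-squares problem. A small simulated sketch (the coefficients and sample size are illustrative):

```python
# Equation-by-equation OLS for a VAR(1); Sigma-hat = (1/T) sum e-hat_t e-hat_t'.
import numpy as np

rng = np.random.default_rng(0)
k, T = 2, 500
a1 = np.array([[0.5, 0.1], [0.0, 0.3]])  # illustrative true coefficients

y = np.zeros((T + 1, k))
for t in range(1, T + 1):
    y[t] = y[t - 1] @ a1.T + rng.standard_normal(k)

X, Y = y[:-1], y[1:]  # regress y_t on y_{t-1}
# One least-squares solve per column of Y = OLS equation by equation;
# solving all columns jointly gives the identical estimates.
a1_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ a1_hat.T
Sigma_hat = resid.T @ resid / T  # (1/T) sum of outer products of residuals
print(a1_hat)
```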

Granger Causality
Granger Causality is a misleading name. It would be better called "Granger predictability."
Definition 4. $y$ fails to Granger cause $x$ if it is not helpful in linearly predicting $x$ (in the MSE sense). More formally,
$$MSE\left[\hat{E}(x_{t+s}\,|\,x_t, x_{t-1}, \dots)\right] = MSE\left[\hat{E}(x_{t+s}\,|\,x_t, \dots, y_t, y_{t-1}, \dots)\right], \quad s > 0,$$
where $\hat{E}(x_t|z)$ denotes the best linear prediction of $x_t$ given $z$.
A test of Granger causality is to run OLS:
$$x_t = \alpha_1 x_{t-1} + \dots + \alpha_p x_{t-p} + \beta_1 y_{t-1} + \dots + \beta_p y_{t-p} + e_t$$
and test $H_0: \beta_1 = \beta_2 = \dots = \beta_p = 0$.
Note that:
- Granger causality is not related to economic causality; it is more about predictability.
- There could be simultaneous causality or omitted variable problems. For example, there may be a variable $z$ that causes both $y$ and $x$ but with different lags (sluggish response). If one does not include $z$ (omitted variable), it may look like $x$ causes $y$.
- Forward-looking (rational) expectations may even reverse the causality. For example, suppose analysts rationally predict that a stock is going to pay high dividends tomorrow. That will provoke people to buy the stock today, and the price will rise. In the data you would observe that the price rise is followed by high dividends. So, we would find that prices Granger cause dividends, even though it was really that anticipated high dividends caused high prices. Similarly, an increase in orange juice prices Granger causes bad weather in Florida.
How do we test Granger causality in the multivariate case? Assume $y_{1t}$ is a $k_1 \times 1$ vector and $y_{2t}$ is $k_2 \times 1$, and that we have the VAR system
$$\begin{bmatrix} y_{1t} \\ y_{2t} \end{bmatrix} = \begin{bmatrix} A_1(L) y_{1t-1} + A_2(L) y_{2t-1} \\ B_1(L) y_{1t-1} + B_2(L) y_{2t-1} \end{bmatrix} + \begin{bmatrix} e_{1t} \\ e_{2t} \end{bmatrix}.$$
The group of variables $y_2$ fails to Granger cause $y_1$ if $A_2 = 0$. To perform this test we run the unrestricted regression $y_{1t} = A_1(L) y_{1t-1} + A_2(L) y_{2t-1} + e^u_t$ and the restricted regression $y_{1t} = A_1(L) y_{1t-1} + e^r_t$. Then we estimate the corresponding variance-covariance matrices $\hat{\Sigma}_u = \frac{1}{T}\sum_{t=1}^{T} \hat{e}^u_t \hat{e}^{u\prime}_t$ and $\hat{\Sigma}_r = \frac{1}{T}\sum_{t=1}^{T} \hat{e}^r_t \hat{e}^{r\prime}_t$. The test statistic compares these matrices:
$$LR = T\left(\log|\hat{\Sigma}_r| - \log|\hat{\Sigma}_u|\right).$$
Under the null (absence of Granger causality) the LR statistic is asymptotically $\chi^2$ with degrees of freedom equal to the number of restrictions imposed.
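The LR test can be sketched for the simplest bivariate case, with one lag and one restriction; here $|\hat{\Sigma}|$ reduces to the residual variance of the single equation for $y_1$ (the data-generating coefficients are made up):

```python
# LR test of "y2 fails to Granger cause y1": compare restricted and
# unrestricted residual variances; LR = T (log|Sigma_r| - log|Sigma_u|).
import numpy as np

rng = np.random.default_rng(1)
T = 400
y1, y2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    y2[t] = 0.5 * y2[t - 1] + rng.standard_normal()
    y1[t] = 0.3 * y1[t - 1] + 0.4 * y2[t - 1] + rng.standard_normal()  # y2 helps

def resid_var(Y, X):
    """(1/T) times the sum of squared OLS residuals of Y on X."""
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    e = Y - X @ beta
    return e @ e / len(Y)

Y = y1[1:]
X_r = y1[:-1].reshape(-1, 1)               # restricted: own lag only
X_u = np.column_stack([y1[:-1], y2[:-1]])  # unrestricted: add lag of y2
LR = len(Y) * (np.log(resid_var(Y, X_r)) - np.log(resid_var(Y, X_u)))
print(LR)  # compare to the chi-squared(1) 5% critical value, 3.84
```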


Reporting Results

Reporting the matrix of coefficients is not very informative. There are too many of them, and the coefficients are difficult to interpret anyway. Instead, people present impulse-response functions and variance decompositions.

[Figure 1: Impulse-response function. A plot of amplitude against samples.]

Impulse-response

Suppose
$$y_t = a(L) y_{t-1} + e_t$$
with MA representation
$$y_t = c(L) e_t,$$
and let $D u_t = e_t$ be such that the $u_t$ are orthonormal, i.e. $E u_t u_t' = I$. Let $\tilde{c}(L) = c(L) D$, so
$$y_t = \tilde{c}(L) u_t.$$
Definition 5. The impulse-response function is
$$\frac{\partial y_{t,i}}{\partial u^k_{t-j}}.$$
It is the change in $y_{t,i}$ in response to a unit change in $u^k_{t-j}$, holding all other shocks constant. We can plot the impulse-response function as in figure 1.
To estimate an impulse-response, we would
1. Estimate the VAR by OLS to get $\hat{a}$
2. Invert to the MA representation $\hat{c}$
3. Find and apply the rotation $\hat{D}$ to get orthonormal shocks; the impulse response is given by $\hat{\tilde{c}} = \hat{c}\hat{D}$


Standard Errors

Delta-method. To calculate standard errors, we can apply the delta-method to $\hat{a}$: the $\hat{c}$ are just some complicated function of $\hat{a}$. In practice, we can compute this recursively:
$$y_t = a_1 y_{t-1} + \dots + a_p y_{t-p} + e_t = a_1(a_1 y_{t-2} + \dots + a_p y_{t-p-1} + e_{t-1}) + a_2 y_{t-2} + \dots + a_p y_{t-p} + e_t = \dots$$
so $c_1 = a_1$, $c_2 = a_2 + a_1^2$, etc. We can apply the delta-method to each of these coefficients. We'd also need to apply the delta-method to our estimate of $D$. Sometimes this is done in practice. However, it is not really the best way, for two reasons:
- We estimate many $a_j$ from not all that big a sample, so our asymptotics may not be very good.
- This is made even worse by the fact that the $c_k$ are highly non-linear transformations of the $a_j$.
Instead of the delta-method, we can use the bootstrap.
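The recursion above can be sketched in a few lines; for an AR(p), $c_0 = I$ and $c_j = \sum_{i=1}^{\min(j,p)} a_i c_{j-i}$ (scalar case shown; the coefficients are made up):

```python
# Invert an AR(2) to its MA coefficients by recursion:
# c_0 = 1, c_j = sum_{i=1}^{min(j,p)} a_i c_{j-i}.
import numpy as np

a = [0.5, 0.2]            # illustrative a_1, a_2
p, horizon = len(a), 10
c = [1.0]                 # c_0
for j in range(1, horizon + 1):
    c.append(sum(a[i - 1] * c[j - i] for i in range(1, min(j, p) + 1)))

# Matches the text: c_1 = a_1 and c_2 = a_2 + a_1^2
assert np.isclose(c[1], 0.5) and np.isclose(c[2], 0.2 + 0.5 ** 2)
print(c[:4])
```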
Bootstrap

The typical construction of bootstrap confidence sets is the following:
1. Run the regression $y_t = c + a_1 y_{t-1} + \dots + a_p y_{t-p} + e_t$ to get $\hat{c}, \hat{a}_1, \dots, \hat{a}_p$ and residuals $\hat{e}_t$
2. Invert the estimated AR process to get the estimates of the impulse response $\hat{c}_j$ from $\hat{a}_1, \dots, \hat{a}_p$
3. For $b = 1..B$:
   (a) Form $y^*_{t,b} = \hat{c} + \hat{a}_1 y^*_{t-1,b} + \dots + \hat{a}_p y^*_{t-p,b} + e^*_{t,b}$, where $e^*_{t,b}$ is sampled randomly with replacement from $\{\hat{e}_t\}$
   (b) Run the regression $y^*_{t,b} = c + a_1 y^*_{t-1,b} + \dots + a_p y^*_{t-p,b} + e_t$ to get $\hat{a}_{1,b}, \dots, \hat{a}_{p,b}$
   (c) Invert the estimated AR process to get the estimates of the impulse response $\hat{c}_{j,b}$ from $\hat{a}_{1,b}, \dots, \hat{a}_{p,b}$
4. Sort the $\hat{c}_{j,b}$ in ascending order: $\hat{c}_{j,(1)} \leq \dots \leq \hat{c}_{j,(B)}$
5. Form the confidence interval.
There are at least three ways of forming a confidence set:
1. Efron's interval (percentile bootstrap): uses $\hat{c}_j$ as the test statistic. The interval is $[\hat{c}_{j,([B\alpha/2])}, \hat{c}_{j,([B(1-\alpha/2)])}]$.
2. Hall's interval (turned-around bootstrap): uses $\hat{c}_j - c_j$ as the test statistic. It employs the idea of bias correction. The interval is the solution to the inequalities
$$\hat{c}_{j,([B\alpha/2])} - \hat{c}_j \leq \hat{c}_j - c_j \leq \hat{c}_{j,([B(1-\alpha/2)])} - \hat{c}_j,$$
that is, $[2\hat{c}_j - \hat{c}_{j,([B(1-\alpha/2)])},\; 2\hat{c}_j - \hat{c}_{j,([B\alpha/2])}]$.
3. Studentized bootstrap: uses the t-statistic $t_j = \frac{\hat{c}_j - c_j}{s.e.(\hat{c}_j)}$. The interval is the solution to the inequalities
$$t_{j,([B\alpha/2])} \leq \frac{\hat{c}_j - c_j}{s.e.(\hat{c}_j)} \leq t_{j,([B(1-\alpha/2)])},$$
that is, $[\hat{c}_j - t_{j,([B(1-\alpha/2)])}\, s.e.(\hat{c}_j),\; \hat{c}_j - t_{j,([B\alpha/2])}\, s.e.(\hat{c}_j)]$.
Remark 6. The bootstrap is still an asymptotic procedure. One advantage of the bootstrap is its simplicity: there is no need to apply the delta-method.
Remark 7. There are variations of the bootstrap that also work. For example, you could sample the errors from a normal distribution with variance $\hat{\Sigma}$. This would be called a parametric bootstrap, because we'd be relying on a parametric assumption to create our simulated samples.
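A minimal sketch of steps 1-5 for an AR(1) without an intercept (so the impulse response at horizon $j$ is just $\hat{a}^j$); the sample size, $B$, and $\alpha$ are illustrative:

```python
# Residual bootstrap for an AR(1) impulse response, with Efron's
# percentile interval and Hall's turned-around interval.
import numpy as np

rng = np.random.default_rng(2)
T, B, j, alpha = 300, 999, 3, 0.10
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

def fit_ar1(z):
    """OLS slope of z_t on z_{t-1} (no intercept, for brevity)."""
    return z[:-1] @ z[1:] / (z[:-1] @ z[:-1])

a_hat = fit_ar1(y)
e_hat = y[1:] - a_hat * y[:-1]           # residuals to resample

c_boot = np.empty(B)
for b in range(B):
    e_star = rng.choice(e_hat, size=T - 1, replace=True)
    y_star = np.zeros(T)
    for t in range(1, T):
        y_star[t] = a_hat * y_star[t - 1] + e_star[t - 1]
    c_boot[b] = fit_ar1(y_star) ** j     # bootstrap impulse response at horizon j

c_boot.sort()
lo, hi = c_boot[int(B * alpha / 2)], c_boot[int(B * (1 - alpha / 2))]
efron = (lo, hi)                                   # percentile interval
hall = (2 * a_hat ** j - hi, 2 * a_hat ** j - lo)  # turned-around interval
print(efron, hall)
```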


Bootstrap-after-bootstrap. Simulations show that the bootstrap works somewhat better for impulse responses than asymptotic (delta-method) intervals. This is due to a finite-sample correction - remember that the dependence between $\{a_j\}$ and $\{c_j\}$ is non-linear. However, the coverage of these intervals is still very far from ideal, especially for very persistent processes. The main reason is that the $\hat{a}_j$ are very biased estimates of $a_j$. To correct this, a bootstrap-after-bootstrap was suggested.

1. Run the regression $y_t = c + a_1 y_{t-1} + \dots + a_p y_{t-p} + e_t$ to get $\hat{c}, \hat{a}_1, \dots, \hat{a}_p$ and residuals $\hat{e}_t$
2. Invert the estimated AR process to get the estimates of the impulse response $\hat{c}_j$ from $\hat{a}_1, \dots, \hat{a}_p$
3. For $b = 1..B$:
   (a) Form $y^*_{t,b} = \hat{c} + \hat{a}_1 y^*_{t-1,b} + \dots + \hat{a}_p y^*_{t-p,b} + e^*_{t,b}$, where $e^*_{t,b}$ is sampled randomly with replacement from $\{\hat{e}_t\}$
   (b) Run the regression $y^*_{t,b} = c + a_1 y^*_{t-1,b} + \dots + a_p y^*_{t-p,b} + e_t$ to get $\hat{a}_{1,b}, \dots, \hat{a}_{p,b}$
4. Calculate bias-corrected estimates of the $a_j$: $\tilde{a}_j = 2\hat{a}_j - \frac{1}{B}\sum_{b=1}^{B} \hat{a}_{j,b}$
5. For $b = 1..B$:
   (a) Form $y^*_{t,b} = \tilde{a}_1 y^*_{t-1,b} + \dots + \tilde{a}_p y^*_{t-p,b} + e^*_{t,b}$, where $e^*_{t,b}$ is sampled randomly with replacement from $\{\hat{e}_t\}$
   (b) Run the regression $y^*_{t,b} = c + a_1 y^*_{t-1,b} + \dots + a_p y^*_{t-p,b} + e_t$ to get $\tilde{a}_{1,b}, \dots, \tilde{a}_{p,b}$
   (c) Invert the estimated AR process to get the estimates of the impulse response $\tilde{c}_{j,b}$ from $\tilde{a}_{1,b}, \dots, \tilde{a}_{p,b}$
6. Form the confidence interval.


MIT OpenCourseWare
http://ocw.mit.edu

14.384 Time Series Analysis


Fall 2013

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
