Chapter 2 - Autocorrelation


THE MEANING OF THE ASSUMPTION OF SERIAL INDEPENDENCE


The fourth assumption of OLS is that the successive values of the random variable u
are temporally independent, that is, that the value which u assumes in any period is
independent from the value which it assumed in any previous period.
This assumption implies that the covariance of u_i and u_j is equal to zero:

cov(u_i, u_j) = E{[u_i − E(u_i)][u_j − E(u_j)]} = E(u_i u_j) = 0   (for i ≠ j)

given that, by Assumption 2, E(u_i) = E(u_j) = 0.


If this assumption is not satisfied, that is, if the value of u in any particular period is
correlated with its own preceding value (or values), we say that there is
autocorrelation or serial correlation of the random variable.
Autocorrelation refers to the relationship, not between two (or more) different
variables, but between the successive values of the same variable.
Most of the standard econometric textbooks deal with the simple case of a linear
relationship between any two successive values of u:

u_t = ρu_{t−1} + v_t

This is known as a first-order autoregressive relationship. We will deal with the
simple autocorrelation coefficient ρ_{u_t u_{t−1}}. The solution will be subject to all the
criticisms of the simple correlation coefficient treated earlier.


We may obtain a rough idea of the existence or absence of autocorrelation in the
u's by plotting the values of the regression residuals, the e's, on a two-dimensional
diagram. The e's are estimates of the true values of u; thus if the e's are correlated,
this suggests autocorrelation of the true u's. In drawing the scatter diagram of the e's
we should bear in mind the following:
I. The 'variables' whose correlation we attempt to detect in this case are e_t and
e_{t−1}:

    Variable I: e_t              Variable II: e_{t−1}
    e_{t+1}      (= e_2)         e_t          (= e_1)
    e_{t+2}      (= e_3)         e_{t+1}      (= e_2)
    e_{t+3}      (= e_4)         e_{t+2}      (= e_3)
    …                            …
    e_{t+(n−1)}  (= e_{n−1})     e_{t+(n−2)}  (= e_{n−2})
    e_{t+n}      (= e_n)         e_{t+(n−1)}  (= e_{n−1})

The observable points to be plotted are the pairs (e_t, e_{t−1}): (e_2, e_1), (e_3, e_2),
(e_4, e_3), …, (e_n, e_{n−1}). It is clear that if most of the points (e_t, e_{t−1}) fall in
quadrants I and III, the autocorrelation will be positive, since the products of e_t and
e_{t−1} are positive. If most of the points (e_t, e_{t−1}) fall in quadrants II and IV, the
autocorrelation will be negative, because the products of e_t and e_{t−1} are negative.
Another method commonly used in applied econometric research for the detection
of autocorrelation is to plot the regression residuals, the e's, against time. If the e's in
successive periods show a regular time pattern, we conclude that there is
autocorrelation in the function. In general, if the successive values of the e's change
sign frequently, the autocorrelation is negative. If the e's do not change sign frequently,
so that several positive e's are followed by several negative values of e's, the
autocorrelation is positive.
A measure of the first-order autocorrelation is provided by the autocorrelation
coefficient

r_{e_t e_{t−1}} = ∑ e_t e_{t−1} / (√(∑ e_t²) √(∑ e²_{t−1})) = ρ̂_{u_t u_{t−1}}

r_{e_t e_{t−1}} is an estimate of the true autocorrelation coefficient ρ_{u_t u_{t−1}}, which
measures the correlation of the true population of u's.
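As a concrete illustration, this coefficient can be computed directly from a residual series. The sketch below is plain Python; the two residual series are made-up numbers, not data from the text:

```python
from math import sqrt

def first_order_autocorr(e):
    # r = sum(e_t * e_{t-1}) / (sqrt(sum e_t^2) * sqrt(sum e_{t-1}^2)),
    # sums running over t = 2, ..., n
    num = sum(e[t] * e[t - 1] for t in range(1, len(e)))
    den = sqrt(sum(x * x for x in e[1:])) * sqrt(sum(x * x for x in e[:-1]))
    return num / den

smooth = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]          # slowly drifting residuals
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # sign flips every period
r_pos = first_order_autocorr(smooth)             # close to +1
r_neg = first_order_autocorr(alternating)        # exactly -1 for this series
```

A smoothly drifting series gives r near +1 (positive autocorrelation), while a series that flips sign every period gives r near −1.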
If the value of u in any particular period depends on its own value in the preceding
period alone, we say that the u ’ s follow a first-order autoregressive scheme (or
first-order Markov process). The relationship between u ’ s is then of the form

u_t = f(u_{t−1})

If u depends on the values of the two previous periods, that is u_t = f(u_{t−1}, u_{t−2}), the
form of autocorrelation is called a second-order autoregressive scheme, and so
on.
In most applied research it is assumed that, when autocorrelation is present, it is of
the simple first-order form u_t = f(u_{t−1}), and more particularly

u_t = a₁u_{t−1} + v_t

where a₁ = the coefficient of the autocorrelation relationship, and
v = a random variable satisfying all the usual assumptions:

E(v) = 0,  E(v²) = σ_v²,  E(v_i v_j) = 0   (for i ≠ j)


Clearly this is the simplest possible form of autocorrelation: a linear relationship
between u_t and u_{t−1} (with suppressed constant intercept). If we apply OLS to this
relationship we obtain

â₁ = ∑_{t=2}^{n} u_t u_{t−1} / ∑_{t=2}^{n} u²_{t−1}

On the other hand the autocorrelation coefficient ρ is given by the formula

ρ_{u_t u_{t−1}} = ∑ u_t u_{t−1} / (√(∑ u_t²) √(∑ u²_{t−1}))

Given that for large samples ∑ u_t² ≈ ∑ u²_{t−1}, we may write

ρ ≈ ∑ u_t u_{t−1} / ∑ u²_{t−1}

Clearly ρ̂ ≈ â₁ for large samples. This is why in most textbooks the simple first-order
autoregressive model is given in the form

u_t = ρu_{t−1} + v_t

where ρ is the first-order autocorrelation coefficient. Clearly if ρ = 0, then u_t = v_t, that is,
u_t is not autocorrelated.
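The near-equality of ρ̂ and â₁ in large samples can be checked by simulation. The sketch below assumes a simulated AR(1) series with ρ = 0.6 and standard-normal v_t (all numbers hypothetical):

```python
import random
from math import sqrt

random.seed(0)
rho = 0.6
u = [0.0]
for _ in range(20000):                        # simulate u_t = rho*u_{t-1} + v_t
    u.append(rho * u[-1] + random.gauss(0.0, 1.0))

num = sum(u[t] * u[t - 1] for t in range(1, len(u)))
a1_hat = num / sum(x * x for x in u[:-1])     # OLS slope of u_t on u_{t-1}
rho_hat = num / (sqrt(sum(x * x for x in u[1:])) *
                 sqrt(sum(x * x for x in u[:-1])))  # correlation coefficient
```

In a sample of this size the two estimates agree to the second decimal place, and both are close to the true ρ.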

SOURCES OF AUTOCORRELATION
Autocorrelated values of the disturbance term u may be observed for many reasons.
i. Omitted explanatory variables.
ii. Misspecification of the mathematical form of the model.
iii. Interpolations of the statistical observations.
iv. Misspecification of the true random term u.
It should be noted that the source of autocorrelation has a strong bearing on the
solution which must be adopted for the correction of the incidence of serial
correlation.

PLAUSIBILITY OF THE ASSUMPTION OF NON-AUTOCORRELATION


From past discussions, it should be obvious that the assumption of temporal
independence of the values of u can easily be violated in practice. Thus:
a. Taking into account that in most applied econometric research only the most
important (three or four) explanatory variables are included explicitly in the
function, it is natural that omitted variables are a frequent cause of 'quasi-
autocorrelation'. In particular, if we use time series it is almost certain that
some at least of these omitted variables will be serially correlated, since in
economic life it is usual for the value of any variable in one particular period to
be partly determined by its own value in the preceding period (or periods). For
example, output in period t depends on output in period t−1, etc.
b. Interpolation and, in general, the customary data-collecting and processing
techniques impart serial correlation to many aggregative time series.
c. Random factors tend to persist over several time periods.

THE FIRST ORDER AUTOREGRESSIVE SCHEME


We shall first establish the mean, variance and covariance of u when its values are
correlated with the simple Markov process. In this case the autoregressive structure
is

u_t = ρu_{t−1} + v_t   with |ρ| < 1

where ρ = the coefficient of the autocorrelation relationship, and
v_t = a random variable satisfying all the usual assumptions:

E(v) = 0,  E(v²) = σ_v²,  E(v_i v_j) = 0   (for i ≠ j)

By continuous substitution of lagged values (u_{t−1} = ρu_{t−2} + v_{t−1}, and so on),
the value of the error term when it is autocorrelated with a first-order
autoregressive scheme is

u_t = ∑_{r=0}^{∞} ρ^r v_{t−r}

We will next establish the mean, variance and covariance of this autocorrelated
disturbance variable.
I. The mean of the autocorrelated u's

E(u_t) = E(∑_{r=0}^{∞} ρ^r v_{t−r}) = ∑_{r=0}^{∞} ρ^r E(v_{t−r})

But by the assumptions of the distribution of v we have

E(v_{t−r}) = 0.

Therefore E(u_t) = 0   (t = 1, 2, …, n)

II. The variance of the autocorrelated u's

By definition of the variance we have

E(u_t²) = E{(∑_{r=0}^{∞} ρ^r v_{t−r})²} = ∑_{r=0}^{∞} (ρ^r)² E(v²_{t−r})

(the cross-product terms vanish because E(v_i v_j) = 0 for i ≠ j)

= ∑_{r=0}^{∞} (ρ^r)² var(v_{t−r}) = σ_v²[1 + ρ² + ρ⁴ + ρ⁶ + …]

The expression in brackets is the sum of a geometric progression of infinite
terms, whose first term is unity and whose common ratio is ρ². Since |ρ| < 1,
taking the sum of the geometric progression we have

E(u_t²) = σ_v² · 1/(1 − ρ²)   (for |ρ| < 1)

III. The covariance of the autocorrelated u's

Given that u_t = v_t + ρv_{t−1} + ρ²v_{t−2} + ρ³v_{t−3} + …
and u_{t−1} = v_{t−1} + ρv_{t−2} + ρ²v_{t−3} + ρ³v_{t−4} + …

we obtain:

cov(u_t, u_{t−1}) = E{[u_t − E(u_t)][u_{t−1} − E(u_{t−1})]} = E(u_t u_{t−1})
= E{[v_t + ρ(v_{t−1} + ρv_{t−2} + …)][v_{t−1} + ρv_{t−2} + …]}
= E{v_t(v_{t−1} + ρv_{t−2} + …)} + ρE{(v_{t−1} + ρv_{t−2} + …)²}
= 0 + ρ(σ_v² + ρ²σ_v² + ρ⁴σ_v² + …)
= ρσ_v²[1 + ρ² + ρ⁴ + ρ⁶ + …]
= ρσ_v²/(1 − ρ²)   (for |ρ| < 1)

Similarly, cov(u_t, u_{t−2}) = E(u_t u_{t−2}) = ρ²σ_v²/(1 − ρ²), and in general

cov(u_t, u_{t−s}) = ρ^s σ_v²/(1 − ρ²) = ρ^s σ_u²   (for s ≠ 0)
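The three moments derived above can be verified by simulation. The sketch below assumes hypothetical parameters (ρ = 0.5, σ_v = 1) and checks that the sample mean, variance and lag-one autocovariance of a simulated AR(1) series approach 0, σ_v²/(1 − ρ²) and ρσ_u² respectively:

```python
import random

random.seed(42)
rho, sigma_v = 0.5, 1.0
u = [0.0]
for _ in range(100000):                       # u_t = rho*u_{t-1} + v_t
    u.append(rho * u[-1] + random.gauss(0.0, sigma_v))
u = u[1000:]                                  # drop burn-in so the process is
                                              # near its stationary distribution
n = len(u)
mean_u = sum(u) / n
var_u = sum(x * x for x in u) / n
cov1 = sum(u[t] * u[t - 1] for t in range(1, n)) / (n - 1)

sigma_u2 = sigma_v ** 2 / (1 - rho ** 2)      # theoretical variance = 4/3
```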

CONSEQUENCES OF AUTOCORRELATION
When the disturbance term exhibits serial correlation, the values as well as the
standard errors of the parameter estimates are affected. In particular:
I. The estimates of the parameters remain statistically unbiased. In other
words, even when the residuals are serially correlated the parameter
estimates of OLS are statistically unbiased, in the sense that their expected
value is equal to the true parameters.
II. With auto-correlated values of the disturbance term the variances of the OLS
parameter estimates are likely to be larger than those of other econometric
methods.
III. The variance of the random term u may be seriously underestimated if the
u's are auto-correlated. In particular, the underestimation of the variance of
u will be more serious in the case of positive auto-correlation of the error
term (u_t) and of positively auto-correlated values of X (in successive time
periods).
Note that if X is not auto-correlated, but is approximately random, then, even
if u is auto-correlated, the bias in var(u) and var(b̂_i) is not likely to be serious.
IV. Finally, if the values of u are auto-correlated, the predictions based on OLS
estimates will be inefficient, in the sense that they will have a larger variance
as compared with predictions based on estimates obtained from other
econometric techniques.

TEST FOR AUTOCORRELATION

We said that some rough idea of the existence and the pattern of autocorrelation
may be gained by plotting the regression residuals either against their own lagged
value(s), or against time.
The Von Neumann Ratio

σ²/s_X² = [∑_{t=2}^{n} (X_t − X_{t−1})² / (n−1)] / [∑ (X_t − X̄)² / n]

This is the ratio of the variance of the first differences of any variable X over the
variance of X. The ratio is applicable to directly observed series and to variables
which are random, that is, variables whose successive values are not auto-
correlated. In the case of the u's

σ²/s² = [∑_{t=2}^{n} (e_t − e_{t−1})² / (n−1)] / [∑ (e_t − ē)² / n]

For large samples (n > 30) the von Neumann ratio may be applied approximately to
the regression residuals (with ē = 0 by definition). The test is not applicable for
testing the autocorrelation of the u's if the sample is small (n < 30).
The Durbin-Watson Test
Durbin and Watson have suggested a test which is applicable to small samples.
However, the test is applicable only for the first-order autoregressive scheme
(u_t = ρu_{t−1} + v_t). The test may be outlined as follows:
The null hypothesis is

H₀: ρ = 0, that is, the u's are not auto-correlated with a first-order scheme.

Against the alternative hypothesis

H₁: ρ ≠ 0, that is, the u's are auto-correlated with a first-order scheme.

To test the null hypothesis we use the Durbin-Watson statistic

d = ∑_{t=2}^{n} (e_t − e_{t−1})² / ∑_{t=1}^{n} e_t²

The values of d lie between 0 and 4, and when d = 2, ρ = 0; therefore testing
H₀: ρ = 0 is equivalent to testing H₀: d = 2.
Expanding the d statistic we obtain

d = ∑_{t=2}^{n} (e_t − e_{t−1})² / ∑_{t=1}^{n} e_t²
  = [∑_{t=2}^{n} e_t² + ∑_{t=2}^{n} e²_{t−1} − 2∑_{t=2}^{n} e_t e_{t−1}] / ∑_{t=1}^{n} e_t²

But for large samples the terms ∑_{t=2}^{n} e_t², ∑_{t=2}^{n} e²_{t−1} and ∑_{t=1}^{n} e_t²
are approximately equal. Therefore we may write

d ≈ [2∑ e²_{t−1} − 2∑ e_t e_{t−1}] / ∑ e²_{t−1} = 2(1 − ∑ e_t e_{t−1} / ∑ e²_{t−1})

But

∑ e_t e_{t−1} / ∑ e²_{t−1} = ρ̂

Therefore

d ≈ 2(1 − ρ̂)   or   ρ̂ = 1 − d/2

From this expression it is obvious that the values of d lie between 0 and 4.
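A minimal plain-Python sketch of the d statistic and the implied estimate ρ̂ = 1 − d/2 (the residual series is illustrative, not data from the text):

```python
def durbin_watson(e):
    # d = sum over t>=2 of (e_t - e_{t-1})^2, divided by sum over all t of e_t^2
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(x * x for x in e)

alternating = [1.0, -1.0] * 10      # residuals that flip sign every period
d_alt = durbin_watson(alternating)  # near 4: strong negative autocorrelation
rho_alt = 1 - d_alt / 2             # near -1
```

For this 20-observation series d = 3.8 rather than exactly 4, because the denominator sums over all n observations while the numerator has only n−1 terms; the gap vanishes as n grows.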

Firstly, if there is no autocorrelation, ρ̂ = 0 and d = 2. Thus if from the sample data
we find d* ≈ 2, we accept that there is no autocorrelation in the function.
Secondly, if ρ̂ = +1, d = 0 and we have perfect positive autocorrelation. Therefore, if
0 < d* < 2 there is some degree of positive autocorrelation, which is stronger the
closer d* is to zero.
Thirdly, if ρ̂ = −1, d = 4 and we have perfect negative autocorrelation. Therefore, if
2 < d* < 4 there is some degree of negative autocorrelation, which is stronger the
higher the value of d*.
It should be clear that in the Durbin-Watson test the null hypothesis of zero
autocorrelation (ρ = 0) is tested indirectly, by testing the equivalent hypothesis
d = 2.
The next step is to use the sample residuals (the e_t's) and compute the empirical
value of the Durbin-Watson statistic, d*. Finally, the empirical d* must be compared
with the theoretical values of d; that is, the values of d which define the critical
region of the test.
The problem with this test is that the exact distribution of d is not known. However,
Durbin and Watson have established upper (d_u) and lower (d_L) limits for the
significance levels of d which are appropriate to test the hypothesis of zero first-order
autocorrelation against the alternative hypothesis of positive first-order
autocorrelation.
The test itself compares the empirical d* value, calculated from the regression
residuals, with the d_L and d_u values in the Durbin-Watson tables, and with their
transforms (4 − d_L) and (4 − d_u). The comparison with d_L and d_u investigates the
possibility of positive autocorrelation, while the comparison with (4 − d_L) and (4 − d_u)
investigates the possibility of negative autocorrelation:
I. If d* < d_L we reject the null hypothesis of no autocorrelation and accept that
there is positive autocorrelation of the first order.
II. If d* > (4 − d_L) we reject the null hypothesis of no autocorrelation and accept
that there is negative autocorrelation of the first order.
III. If d_u < d* < (4 − d_u) we accept the null hypothesis of no autocorrelation.
IV. If d_L < d* < d_u or (4 − d_u) < d* < (4 − d_L) the test is inconclusive.
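The four-way rule can be sketched as a small decision function, where d_lower and d_upper are the tabulated significance points for the given n and k:

```python
def dw_decision(d, d_lower, d_upper):
    # Four-way Durbin-Watson decision rule
    if d < d_lower:
        return "positive autocorrelation"
    if d > 4 - d_lower:
        return "negative autocorrelation"
    if d_upper < d < 4 - d_upper:
        return "no autocorrelation"
    return "inconclusive"
```

With the 5% points d_L = 1.20 and d_u = 1.41 used in the UK imports example later in this chapter, `dw_decision(0.937, 1.20, 1.41)` signals positive autocorrelation.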

Shortcomings of Durbin-Watson test:


1. The d statistic is not an appropriate measure of autocorrelation if among the
explanatory variables there are lagged values of the endogenous variable.
2. The range of values of d over which the Durbin-Watson test is inconclusive has
been a drawback to its application.
3. It is inappropriate for testing for higher order serial correlation or for other forms
of autocorrelation.
An Alternative Test for Autocorrelation
We first apply OLS to the sample observations and obtain the values of the
regression residuals, the e_t's. We then experiment with various forms of
autoregressive structures, for example

e_t = ρe_{t−1} + v_t
e_t = ρe²_{t−1} + v_t
e_t = ρ₁e_{t−1} + ρ₂e_{t−2} + v_t
e_t = ρ√(e_{t−1}) + v_t

and so on.
Autocorrelation is judged in the light of the statistical significance of the ρ̂('s) and the
overall fit of the above regressions. That is, we may carry out any one of the
standard tests of statistical significance for the estimates (the ρ̂'s) of the
autocorrelation relationship (e.g. t = ρ̂_i / s(ρ̂_i)) as well as an F-test for the overall
significance of the regression.
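A sketch of the first-order version of this test: regress e_t on e_{t−1} without an intercept and form t = ρ̂/s(ρ̂). The residual series here is simulated with a true ρ of 0.7 (hypothetical data, not from the text):

```python
import random
from math import sqrt

def ar1_test(e):
    # Regress e_t on e_{t-1} (no intercept); return (rho_hat, t-statistic)
    y, x = e[1:], e[:-1]
    rho_hat = sum(a * b for a, b in zip(y, x)) / sum(b * b for b in x)
    resid = [a - rho_hat * b for a, b in zip(y, x)]
    s2 = sum(r * r for r in resid) / (len(y) - 1)   # residual variance
    se = sqrt(s2 / sum(b * b for b in x))           # s(rho_hat)
    return rho_hat, rho_hat / se

random.seed(1)
e = [0.0]
for _ in range(500):                                # e_t = 0.7*e_{t-1} + v_t
    e.append(0.7 * e[-1] + random.gauss(0.0, 1.0))
rho_hat, t_stat = ar1_test(e[1:])
```

With 500 observations the t-statistic is far above any conventional critical value, so the first-order scheme would be judged significant.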

SOLUTIONS FOR THE CASE OF AUTOCORRELATION

The solution to be adopted in each particular case depends on the source of
autocorrelation. Thus if the source is omitted variables, the appropriate procedure is
to include these variables in the set of explanatory variables. Suppose we omitted
the level of income in the previous period, t−1, while determining the consumption
function of the current period, t; quasi-autocorrelation will occur. The consumption
function may now be written as follows

C_t = b₀ + b₁X_t + b₂X_{t−1} + u_t

If lagged income is omitted, its influence will be reflected in the random term u_t, and
since the successive values of income are usually positively auto-correlated, u_t will
be auto-correlated as well. Autocorrelation would be eliminated in this case by
introducing lagged income explicitly into the function as an explanatory variable of
current consumption.
Similarly, if the source of autocorrelation is the misspecification of the mathematical
form of the relationship, the relevant approach is to change the initial form.
Only when the above sources of autocorrelation have been ruled out should we
accept that the true u's are temporally dependent. For this case the appropriate
procedure is the transformation of the original data so as to produce a model whose
random variable satisfies the assumptions of classical least squares, so that the
parameters can be optimally estimated with this method.
Once autocorrelation is detected, the appropriate corrective procedure is to obtain
an estimate of the ρ('s) and apply OLS to a set of transformed data. The
transformation of the original data depends on the pattern of the autoregressive
structure.

First Order Autoregressive Scheme

If autocorrelation is of the first-order scheme u_t = ρu_{t−1} + v_t, the appropriate
transformation is to subtract from the original observations of each period the
product of ρ̂ times the value of the variables in the previous period. Then we apply
OLS to the model

Y*_t = b₀(1 − ρ̂) + b₁X*_{1t} + … + b_k X*_{kt} + v_t

where

Y*_t = Y_t − ρ̂Y_{t−1},   X*_{jt} = X_{jt} − ρ̂X_{j(t−1)}   (j = 1, 2, …, k),   v_t = u_t − ρu_{t−1}

The transformation is obtained as follows. The original relationship is

Y_t = b₀ + b₁X_{1t} + b₂X_{2t} + … + b_k X_{kt} + u_t   where u_t = ρu_{t−1} + v_t

with v_t satisfying all the assumptions of a random variable.
The relationship for the period t−1 is

Y_{t−1} = b₀ + b₁X_{1(t−1)} + b₂X_{2(t−1)} + … + b_k X_{k(t−1)} + u_{t−1}

Multiplying this equation by ρ, we get

ρY_{t−1} = ρb₀ + b₁ρX_{1(t−1)} + b₂ρX_{2(t−1)} + … + b_k ρX_{k(t−1)} + ρu_{t−1}

Subtracting this from the original relationship we obtain

[Y_t − ρY_{t−1}] = b₀[1 − ρ] + b₁[X_{1t} − ρX_{1(t−1)}] + … + b_k[X_{kt} − ρX_{k(t−1)}] + [u_t − ρu_{t−1}]

The new error term u_t − ρu_{t−1} is equal to v_t and hence is ex hypothesi a random,
serially uncorrelated variable. Consequently, if we know ρ we can apply OLS to the
transformed relationship.
Note that with the above transformation we are able to retain in our analysis only
n−1 observations, since we lose one observation in the process. To avoid this loss,
K. R. Kadiyala has suggested the following transformation of the first observation:

Y*₁ = Y₁√(1 − ρ²)   and   X*_{j1} = X_{j1}√(1 − ρ²)   (j = 1, 2, …, k)
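The transformation above can be sketched as a small quasi-differencing helper that keeps the first observation via Kadiyala's √(1 − ρ²) scaling, so no observation is lost (the series shown is illustrative):

```python
from math import sqrt

def quasi_difference(series, rho):
    # First observation: Kadiyala's sqrt(1 - rho^2) scaling;
    # remaining observations: z_t - rho * z_{t-1}
    first = series[0] * sqrt(1 - rho ** 2)
    rest = [series[t] - rho * series[t - 1] for t in range(1, len(series))]
    return [first] + rest

Y = [10.0, 12.0, 13.5, 15.0]
Y_star = quasi_difference(Y, 0.5)   # same length as Y: nothing is lost
```

The same helper would be applied to Y and to every explanatory variable before running OLS on the transformed data.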

METHODS FOR ESTIMATING THE AUTOCORRELATION PARAMETERS


Method I: A Priori Information on ρ (or ρ's)
In most applied econometric research, when autocorrelation is suspected or
established on the basis of a formal test, the investigator makes some 'reasonable'
guess about the value of the autoregressive coefficient(s) ρ, using his knowledge
and intuition about the relationship being studied.
Assuming that ρ = 1, the appropriate transformation is to take the first differences of
the original data and apply OLS to the transformed model

(Y_t − Y_{t−1}) = b₁(X_t − X_{t−1}) + v_t   where v_t = u_t − u_{t−1}

Proof
The original relationship is

Y_t = b₀ + b₁X_t + u_t

where u_t = ρu_{t−1} + v_t, with v_t satisfying all the usual assumptions of a random
variable. Assuming ρ = 1,

u_t = u_{t−1} + v_t   or   v_t = u_t − u_{t−1}

Now, lagging the original equation by one period and multiplying by ρ we obtain

ρY_{t−1} = ρb₀ + b₁ρX_{t−1} + ρu_{t−1}
or   Y_{t−1} = b₀ + b₁X_{t−1} + u_{t−1}   (given that ρ = 1)

Subtracting this from the original equation we have

(Y_t − Y_{t−1}) = b₁(X_t − X_{t−1}) + v_t

where v_t = u_t − u_{t−1} is serially independent by assumption. Note that the constant
intercept cancels in the subtraction, so the transformed model has no intercept.
It is obvious that when one assumes ρ = 1 and then takes the first differences of the
variables as the appropriate transformation, the computational procedure is greatly
simplified. This is the reason why first differences are popular in applied research
whenever autocorrelation is present in the original model.
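A tiny sketch of the first-difference procedure, regressing ΔY on ΔX through the origin (the data are hypothetical numbers generated exactly as Y = 5 + 2X, so the first-difference slope recovers b₁ = 2):

```python
def first_differences(series):
    return [series[t] - series[t - 1] for t in range(1, len(series))]

def slope_through_origin(dy, dx):
    # OLS with no intercept, as in the first-difference model
    return sum(a * b for a, b in zip(dy, dx)) / sum(b * b for b in dx)

X = [1.0, 2.0, 4.0, 7.0, 11.0]
Y = [5 + 2 * x for x in X]          # exact relationship Y = 5 + 2X
b1 = slope_through_origin(first_differences(Y), first_differences(X))
```

The intercept b₀ drops out of the differenced model, which is exactly why it cannot be estimated from first differences alone.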

Method II: Estimation of ρ from the d statistic

Another crude method for the estimation of the coefficient of the autoregressive
scheme is to solve for ρ the expression

d ≈ 2(1 − ρ)

From the application of the Durbin-Watson test we obtain d*, which we may
substitute in the above expression to get

ρ̂ = 1 − d*/2

If the sample is small, the estimate ρ̂ will not be accurate, since the relationship
d ≈ 2(1 − ρ) holds only asymptotically (for large samples).

Method III: The Cochrane-Orcutt Iterative Method

This method involves a gradual approximation (convergence) to the estimate of the
autocorrelation coefficient, ρ. It may be outlined as follows.

Step 1: Apply OLS to the original data and obtain estimates of the coefficients b₀ and b₁:

Ŷ_t = b̂₀ + b̂₁X_t

Compute the 'first-round' residuals e_t = Y_t − b̂₀ − b̂₁X_t (t = 1, 2, …, n), and from
these obtain the 'first-round' estimate of ρ using the formula developed earlier:

ρ̂ = ∑ e_t e_{t−1} / ∑ e²_{t−1}   (t = 2, 3, …, n)

Step 2: Use ρ̂ to transform the original data and apply OLS to the model

(Y_t − ρ̂Y_{t−1}) = b₀(1 − ρ̂) + b₁(X_t − ρ̂X_{t−1}) + u*_t

Denote the 'second-round' estimates by b̂₀^(2) and b̂₁^(2) (where b̂₀^(2) is an estimate
of the intercept b₀(1 − ρ̂)). Using b̂₀^(2) and b̂₁^(2), compute the 'second-round'
residuals e_t^(2) = Y_t − b̂₀^(2) − b̂₁^(2)X_t (t = 1, 2, …, n), and from these obtain the
'second-round' estimate of ρ:

ρ̂^(2) = ∑ e_t^(2) e_{t−1}^(2) / ∑ (e_{t−1}^(2))²   (t = 2, 3, …, n)

Step 3: Use ρ̂^(2) to transform the original variables and apply OLS to the model

(Y_t − ρ̂^(2)Y_{t−1}) = b₀(1 − ρ̂^(2)) + b₁(X_t − ρ̂^(2)X_{t−1}) + u**_t

We obtain the 'third-round' estimates, b̂₀^(3) and b̂₁^(3), which yield the 'third-round'
residuals and, from these, the 'third-round' estimate of ρ.
This iterative procedure is repeated until the value of the estimate of ρ converges.
Some researchers stop at the 'second-round' estimates, b̂₀^(2) and b̂₁^(2). This is then
called the two-stage Cochrane-Orcutt method.
An alternative approach is to use at each step of the iteration (for the first-order
autoregressive scheme) the Durbin-Watson d statistic to test the residuals for
autocorrelation. If they pass the test of zero autocorrelation, the iterations stop. If
not, the iterations proceed until the hypothesis of zero autocorrelation is accepted.
Tests of significance of the b̂'s are conducted only at the final iteration.
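A minimal sketch of the iteration for the simple linear case, assuming hypothetical simulated data (Y = 2 + 3X with AR(1) errors, ρ = 0.6). Note one small departure from the text's wording: after each transformed regression the intercept estimate is divided by (1 − ρ̂) to recover b₀ before the residuals are recomputed:

```python
import random

def ols(y, x):
    # Simple OLS of y on x with an intercept; returns (b0, b1)
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
          sum((a - mx) ** 2 for a in x))
    return my - b1 * mx, b1

def cochrane_orcutt(y, x, tol=1e-8, max_iter=100):
    b0, b1 = ols(y, x)                        # Step 1: OLS on original data
    rho = 0.0
    for _ in range(max_iter):
        e = [yy - b0 - b1 * xx for yy, xx in zip(y, x)]
        new_rho = (sum(e[t] * e[t - 1] for t in range(1, len(e))) /
                   sum(v * v for v in e[:-1]))
        y_star = [y[t] - new_rho * y[t - 1] for t in range(1, len(y))]
        x_star = [x[t] - new_rho * x[t - 1] for t in range(1, len(x))]
        a0, b1 = ols(y_star, x_star)          # Step 2: OLS on transformed data
        b0 = a0 / (1 - new_rho)               # a0 estimates b0*(1 - rho)
        if abs(new_rho - rho) < tol:          # stop when rho converges
            break
        rho = new_rho
    return b0, b1, new_rho

# Hypothetical data: Y = 2 + 3X + u, with u_t = 0.6*u_{t-1} + v_t
random.seed(3)
u, x, y = [0.0], [], []
for t in range(400):
    u.append(0.6 * u[-1] + random.gauss(0.0, 0.5))
    x.append(t * 0.05)
    y.append(2 + 3 * x[-1] + u[-1])
b0, b1, rho = cochrane_orcutt(y, x)
```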

Method IV: Durbin’s “two-step” method of estimation of ρ


Durbin has suggested the following two-step method for estimating ρ, which is
applicable for any order of autoregressive scheme.
Assume the original function is

Y_t = b₀ + b₁X_{1t} + b₂X_{2t} + … + b_k X_{kt} + u_t   where u_t = f(u_{t−1}, u_{t−2}, …)

For simplicity, let u_t = ρu_{t−1} + v_t.

Stage 1: We start from the transformed model

(Y_t − ρY_{t−1}) = b₀(1 − ρ) + b₁(X_{1t} − ρX_{1(t−1)}) + b₂(X_{2t} − ρX_{2(t−1)}) + …
                 + b_k(X_{kt} − ρX_{k(t−1)}) + (u_t − ρu_{t−1})

We write this equation as follows

Y_t = ρY_{t−1} + b₀(1 − ρ) + b₁X_{1t} − b₁ρX_{1(t−1)} + … + b_k X_{kt} − b_k ρX_{k(t−1)} + v_t

Setting b₀(1 − ρ) = a₀, b₁ = a₁, −b₁ρ = a₂, etc., we may write the equation in the
following form

Y_t = a₀ + ρY_{t−1} + a₁X_{1t} + a₂X_{1(t−1)} + … + v_t

Applying least squares to this equation we obtain an estimate ρ̂ of ρ, which is the
coefficient of the lagged variable Y_{t−1}.

Stage 2: We use the estimate ρ̂ to obtain the transformed variables

Y*_t = Y_t − ρ̂Y_{t−1}
X*_{1t} = X_{1t} − ρ̂X_{1(t−1)}
…
X*_{kt} = X_{kt} − ρ̂X_{k(t−1)}

which we use in order to estimate the parameters of the original relationship

Y*_t = b₀* + b₁X*_{1t} + b₂X*_{2t} + … + b_k X*_{kt} + v_t   (where b₀* = b₀(1 − ρ̂))

Durbin's method provides estimates which have optimal asymptotic properties and
are more efficient for samples of all sizes.
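The two stages can be sketched as follows, using a small normal-equations OLS helper and hypothetical simulated data (Y = 1 + 2X with AR(1) errors, ρ = 0.5):

```python
import random

def ols(y, xs):
    # OLS with intercept; xs is a list of regressor columns.
    # Solves the normal equations X'X b = X'y by Gaussian elimination.
    cols = [[1.0] * len(y)] + xs
    k = len(cols)
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    rhs = [sum(a * b for a, b in zip(cols[i], y)) for i in range(k)]
    for i in range(k):                         # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]
    b = [0.0] * k
    for i in range(k - 1, -1, -1):             # back substitution
        b[i] = (rhs[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Hypothetical data: Y = 1 + 2X + u, u_t = 0.5*u_{t-1} + v_t
random.seed(7)
u, X, Y = [0.0], [], []
for t in range(300):
    u.append(0.5 * u[-1] + random.gauss(0.0, 0.3))
    X.append(random.uniform(0, 10))
    Y.append(1 + 2 * X[-1] + u[-1])

# Stage 1: regress Y_t on Y_{t-1}, X_t, X_{t-1}; rho_hat is the
# coefficient on the lagged dependent variable.
coef = ols(Y[1:], [Y[:-1], X[1:], X[:-1]])
rho_hat = coef[1]

# Stage 2: OLS on the rho_hat-transformed data.
Y_star = [Y[t] - rho_hat * Y[t - 1] for t in range(1, len(Y))]
X_star = [X[t] - rho_hat * X[t - 1] for t in range(1, len(X))]
a0, b1 = ols(Y_star, [X_star])
b0 = a0 / (1 - rho_hat)                        # recover b0 from b0*(1 - rho)
```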

Example: The table below includes data on imports and the gross national product of the UK.
Applying OLS to these observations we obtain the following imports function

ẑ_t = −2,461 + 0.2795X_t   r² = 0.983
      (250)    (0.01)

(standard errors in parentheses). It is found that

∑ e_t² = 573,071.44   and   ∑ (e_t − e_{t−1})² = 537,220.22

The Durbin-Watson d statistic from the above sample is

d* = ∑ (e_t − e_{t−1})² / ∑ e_t² = 537,220.22 / 573,071.44 = 0.937

Year   Imports   GNP    Estimated   Error
t      Z_t       X_t    ẑ_t         e_t = z_t − ẑ_t   e_{t−1}   e_t − e_{t−1}   (e_t − e_{t−1})²   e_t²
(negative values shown in parentheses)
1950 3748 21777 3,626 122 14,964.26
1951 4010 22418 3,805 205 122 83 6,862.55 42,094.32
1952 3711 22308 3,774 (63) 205 (268) 71,960.75 3,979.84
1953 4004 23319 4,057 (53) (63) 10 108.69 2,773.13
1954 4151 24180 4,297 (146) (53) (94) 8,770.23 21,406.62
1955 4569 24893 4,497 72 (146) 219 47,836.91 5,242.70
1956 4582 25310 4,613 (31) 72 (104) 10,722.91 970.01
1957 4697 25799 4,750 (53) (31) (22) 469.83 2,790.01
1958 4753 25886 4,774 (21) (53) 32 1,003.84 446.77
1959 5062 26868 5,049 13 (21) 35 1,192.39 179.40
1960 5669 28134 5,402 267 13 253 64,086.44 71,047.30
1961 5628 29091 5,670 (42) 267 (308) 95,160.84 1,758.50
1962 5736 29450 5,770 (34) (42) 8 58.67 1,174.78
1963 5946 30705 6,121 (175) (34) (141) 19,816.90 30,641.63
1964 6501 32372 6,587 (86) (175) 89 7,934.09 7,391.53
1965 6549 33152 6,805 (256) (86) (170) 28,903.40 65,527.81
1966 6705 33764 6,976 (271) (256) (15) 226.62 73,461.60
1967 7104 34411 7,157 (53) (271) 218 47,595.31 2,795.71
1968 7609 35429 7,441 168 (53) 220 48,606.58 28,087.92
1969 8100 36200 7,657 443 168 276 75,903.28 196,337.61
Totals:   ∑(e_t − e_{t−1})² = 537,220.22   ∑e_t² = 573,071.44

From the Durbin-Watson table, at the 5% level of significance, with n = 20 observations and
k = 1 independent variable, the significance points of d_L and d_u are:

d_L = 1.20   d_u = 1.41

Since d* < d_L, we conclude there is positive autocorrelation in the imports function. We will
use two of the previously developed corrective procedures, namely the alternative test for
autocorrelation and Durbin's two-step method.

A. We regress e_t on e_{t−1}, and on e_{t−1} and e_{t−2}, and obtain the following results

e^t = 0.53e_{t−1}
      (0.26)

e^t = 0.55e_{t−1} − 0.21e_{t−2}
      (0.29)       (0.29)

One could experiment with other forms of autocorrelation, but we limit our example
to the first-order and second-order autoregressive schemes.
The above regressions indicate a first-order autoregressive scheme, since ρ̂₁ is just
significant while ρ̂₂ is not significant at the 5% level.

Using the estimate ρ̂ = 0.53 we obtain the transformed variables, and applying OLS
to the transformed data we obtain

Ŷ*_t = −1394.36 + 0.296X*_t   R² = 0.949
       (235.35)   (0.02)      d* = 1.87

B. Using Durbin's two-stage method we find, for the model

Y_t = a₀ + ρY_{t−1} + a₁X_t + a₂X_{t−1} + v_t

Ŷ_t = −1107.67 + 0.6475Y_{t−1} + 0.3403X_t − 0.2345X_{t−1}
      (692.8)    (0.29)         (0.10)      (0.13)

From ρ̂ = 0.6475 we obtain the transformed data

Y*_t = Y_t − ρ̂Y_{t−1} = Y_t − 0.6475Y_{t−1}
X*_t = X_t − ρ̂X_{t−1} = X_t − 0.6475X_{t−1}

Applying OLS to the above transformed data we find

Ŷ*_t = −372.18 + 0.228X*_t   R² = 0.872
       (223.0)   (0.02)      d* = 2.11

We observe that with the appropriate transformations the value of d comes close
to the 'crucial' value of 2 (which corresponds to zero autocorrelation).
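The example's headline numbers can be reproduced directly from the reported sums; this sketch also shows why part A used ρ̂ = 0.53 (it is the Method II estimate 1 − d*/2):

```python
# d* = sum(e_t - e_{t-1})^2 / sum(e_t^2), using the sums reported above
d_star = 537220.22 / 573071.44
rho_hat = 1 - d_star / 2   # Method II estimate of rho, approximately 0.53
```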

Exercise:
The data in the following table are the OLS residuals of a certain relationship
(Y = b₀ + b₁X + u). Calculate d and estimate ρ with any two of the methods discussed.
Do your results support the use of first differences for the estimation of b₀ and b₁?

Year   e_i     Year   e_i     Year   e_i     Year   e_i
1950    1.0    1955   -0.3    1960   -4.6    1965   -2.6
1951   -1.5    1956   -3.1    1961   -4.3    1966   -2.3
1952   -0.7    1957   -5.5    1962    1.9    1967   -0.9
1953   -1.3    1958   -4.7    1963    1.9    1968    1.4
1954   -4.6    1959   -1.3    1964    2.9    1969    3.7
