
Econometrics I

TA Session 5

Jukina HATAKEYAMA
May 14, 2024

Contents
1 Asymptotic Properties of OLSE
  1.1 Single regression model
2 Test Statistics
  2.1 Chi-Square Distribution
  2.2 Delta Method
  2.3 Test Statistics
  2.4 Review of t Statistics
    2.4.1 General Definition
    2.4.2 t Statistics for OLSE
  2.5 R Exercise
3 Appendix
  3.1 R code


E-mail: u868710a@ecs.osaka-u.ac.jp

1 Asymptotic Properties of OLSE
In this section, we review the asymptotic properties of the OLSE.

1.1 Single regression model


Suppose the single regression model

\[
y_i = \alpha + \beta x_i + u_i, \tag{1}
\]

where \(u_i \overset{\text{i.i.d.}}{\sim} N(0, \sigma^2)\).
The OLSE can be written as

\[
\hat{\beta} = \beta + \sum_{i=1}^{n} w_i u_i. \tag{2}
\]

Recall that \(w_i = (x_i - \bar{x}) / \sum_{j=1}^{n} (x_j - \bar{x})^2\). Then, from the central limit theorem, we obtain

\[
\frac{\sum_{i=1}^{n} w_i u_i - E\left[\sum_{i=1}^{n} w_i u_i\right]}{\sqrt{\operatorname{Var}\left(\sum_{i=1}^{n} w_i u_i\right)}}
= \frac{\hat{\beta} - \beta}{\sigma / \sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}}
\xrightarrow[n \to \infty]{d} N(0, 1), \tag{3}
\]

where \(E[\sum_{i=1}^{n} w_i u_i] = 0\), \(\operatorname{Var}(\sum_{i=1}^{n} w_i u_i) = \sigma^2 \sum_{i=1}^{n} w_i^2 = \sigma^2 / \sum_{i=1}^{n} (x_i - \bar{x})^2\), and \(\sum_{i=1}^{n} w_i u_i = \hat{\beta} - \beta\). Additionally, the LLN implies that:

\[
\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \xrightarrow[n \to \infty]{p} E[(x_i - \mu)^2]. \tag{4}
\]

By using (4) and (3), we can rewrite the CLT result as follows:

\[
\frac{\sqrt{n}\,(\hat{\beta} - \beta)}{\left(\sigma^2 \left[\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2\right]^{-1}\right)^{1/2}} \xrightarrow[n \to \infty]{d} N(0, 1). \tag{5}
\]

Therefore, we can derive the following relationship:

\[
\sqrt{n}\,(\hat{\beta} - \beta) \xrightarrow[n \to \infty]{d} N\left(0, \sigma^2 E[(x_i - \mu)^2]^{-1}\right).
\]
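The limiting result above can be checked with a small Monte Carlo sketch (written in Python here purely for illustration; all parameter values are hypothetical choices). With \(x_i \sim N(0, 1)\) we have \(E[(x_i - \mu)^2] = 1\), so with \(\sigma^2 = 1\) the asymptotic variance of \(\sqrt{n}(\hat{\beta} - \beta)\) equals 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 2000
alpha, beta, sigma = 1.0, 2.0, 1.0   # hypothetical true parameters

stats = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 1.0, n)       # E[(x_i - mu)^2] = 1
    u = rng.normal(0.0, sigma, n)     # u_i ~ N(0, sigma^2) as in (1)
    y = alpha + beta * x + u
    xc = x - x.mean()
    beta_hat = (xc @ y) / (xc @ xc)   # OLS slope, cf. (2)
    stats[r] = np.sqrt(n) * (beta_hat - beta)

# Sample mean and variance of sqrt(n)(beta_hat - beta); the limit predicts 0 and 1
print(stats.mean(), stats.var())
```

Across replications the sample mean is close to 0 and the sample variance close to 1, consistent with the normal limit derived above.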

2 Test Statistics
Ex.) Constrained OLS:

\[
y = xb + u \quad \text{subject to} \quad Rb = q, \tag{6}
\]

where, for example, the constraint is \(b_1 + b_2 = 1\), so that \(R = (1, 1)\) and \(q = 1\).

We can check whether \(Rb - q = 0\) holds by the Wald test. Now, we are going to explain a closely related statistic, the so-called \(\chi^2\) statistic, which leads to the Wald statistic.

2.1 Chi-Square Distribution
We first review the chi-square distribution since the Wald statistic follows this distribution.

Theorem 2.1. Suppose that a random variable \(X \in \mathbb{R}^{n \times 1}\) follows a normal distribution \(X \sim N(\mu, V)\), where \(V \in \mathbb{R}^{n \times n}\) is positive definite. Then, the random variable \(W_0 = (X - \mu)' V^{-1} (X - \mu)\) follows a \(\chi^2(n)\) distribution. Its probability density function is given as follows:

\[
f_n(w_0) = \frac{1}{2^{n/2}\, \Gamma\!\left(\frac{n}{2}\right)} \, w_0^{\frac{n}{2} - 1} e^{-\frac{w_0}{2}},
\]

where \(\Gamma\!\left(\frac{n}{2}\right)\) is the gamma function.

Keep in mind that the \(\chi^2\) distribution with \(n\) degrees of freedom is defined as the sum of the squares of \(n\) independent random variables, each following the standard normal distribution \(N(0, 1)\). By using this definition, the proof of the above theorem is not so difficult.

Proof. Since \(V\) is symmetric and positive definite, it can be decomposed as

\[
V = C' \Lambda C,
\]

where \(C\) is orthogonal and \(\Lambda\) is diagonal with positive entries. Then, we can calculate \(V = V^{1/2} V^{1/2}\) where \(V^{1/2} = C' \Lambda^{1/2} C\). In addition, we can say \(Z \equiv V^{-1/2}(X - \mu) \sim N(0, I)\) by the properties of the multivariate normal distribution. Let \(W_0 \equiv Z'Z = \sum_{i=1}^{n} Z_i^2\). Because each \(Z_i\) follows the standard normal distribution and the \(Z_i\) are independent, each \(Z_i^2\) follows the \(\chi^2(1)\) distribution. Therefore, \(W_0 \sim \chi^2(n)\) is proven. \(\square\)
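Theorem 2.1 can also be checked numerically. The Python sketch below (a hypothetical mean vector and covariance matrix, chosen only for illustration) forms the quadratic form \(W_0\) and verifies that its sample mean and variance match the \(\chi^2(n)\) moments, \(n\) and \(2n\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 3, 50000

# Hypothetical mean vector and positive-definite covariance matrix
mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)                 # positive definite by construction

X = rng.multivariate_normal(mu, V, size=reps)
Vinv = np.linalg.inv(V)
d = X - mu
W0 = np.einsum('ij,jk,ik->i', d, Vinv, d)   # (X - mu)' V^{-1} (X - mu), row by row

# A chi^2(n) random variable has mean n and variance 2n
print(W0.mean(), W0.var())
```

With \(n = 3\), the printed mean and variance are close to 3 and 6 respectively.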

2.2 Delta Method
Consider the case of a parameter \(\theta_0\) to be estimated and a sequence of its estimators \(\hat{\theta}_n\). If \(\hat{\theta}_n\) is asymptotically normal, \(\sqrt{n}(\hat{\theta}_n - \theta_0) \xrightarrow{d} N(0, \Sigma)\), we can derive the following theorem.

Theorem 2.2. Suppose a continuously differentiable function \(g : \mathbb{R}^d \to \mathbb{R}^s\), and let \(\hat{\theta}_n\) be a sequence of \(d\)-dimensional estimators. If \(\hat{\theta}_n\) is asymptotically normal, we can state that:

\[
\sqrt{n}\left[g(\hat{\theta}_n) - g(\theta_0)\right] \xrightarrow[n \to \infty]{d} N\left(0, D_g(\theta_0)\, \Sigma\, D_g'(\theta_0)\right), \tag{7}
\]

where \(D_g(\theta)\) is the Jacobian matrix of \(g\) evaluated at \(\theta\).

Suppose that there is a parameter \(\bar{\theta}\) between \(\theta_0\) and \(\hat{\theta}_n\). Then, we can use a mean value expansion:

\[
g(\hat{\theta}_n) = g(\theta_0) + D_g(\bar{\theta})(\hat{\theta}_n - \theta_0).
\]

By using this expression, we can prove (7).

Proof. Because of the mean value expansion, we can state:

\[
\begin{aligned}
\sqrt{n}\left[g(\hat{\theta}_n) - g(\theta_0)\right] &= \sqrt{n}\, D_g(\bar{\theta})(\hat{\theta}_n - \theta_0) \\
&= \sqrt{n}\, D_g(\theta_0)(\hat{\theta}_n - \theta_0) + \sqrt{n}\left[D_g(\bar{\theta}) - D_g(\theta_0)\right](\hat{\theta}_n - \theta_0). \tag{8}
\end{aligned}
\]

Here, if \(\hat{\theta}_n \xrightarrow[n \to \infty]{p} \theta_0\) is given, \(\bar{\theta} \xrightarrow[n \to \infty]{p} \theta_0\) is established. Therefore, the second term on the right-hand side of (8) is calculated as:

\[
\left[D_g(\bar{\theta}) - D_g(\theta_0)\right] \sqrt{n}(\hat{\theta}_n - \theta_0) = o_p(1)\, O_p(1) = o_p(1), \tag{9}
\]

because \(\sqrt{n}(\hat{\theta}_n - \theta_0)\) converges in distribution.¹ By (8) and (9), we can derive:

\[
\sqrt{n}\left[g(\hat{\theta}_n) - g(\theta_0)\right] = \sqrt{n}\, D_g(\theta_0)(\hat{\theta}_n - \theta_0) + o_p(1). \tag{10}
\]

Since a linear transformation of a multivariate normal limit is again multivariate normal, we can say \(\sqrt{n}\, D_g(\theta_0)(\hat{\theta}_n - \theta_0) \xrightarrow[n \to \infty]{d} N(0, D_g(\theta_0)\, \Sigma\, D_g'(\theta_0))\). We can now apply the asymptotic equivalence lemma of the main text.

Lemma 2.3. Let \(\{x_n\}\) and \(\{z_n\}\) be sequences of \(n \times 1\) random vectors. If \(z_n \xrightarrow[n \to \infty]{d} z\) and \(x_n - z_n \xrightarrow[n \to \infty]{p} 0\), then \(x_n \xrightarrow[n \to \infty]{d} z\).

By using this lemma, we can derive (7). \(\square\)

¹ Lemma 3.5 in the main textbook implies that a \(K \times 1\) vector \(x_n\) is \(O_p(1)\) if \(x_n\) converges to some \(x\) in distribution.
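Theorem 2.2 can be illustrated with a scalar example (a Python sketch with hypothetical parameter values): let \(\hat{\theta}_n\) be the sample mean of \(N(\theta_0, \sigma^2)\) data and \(g(\theta) = \theta^2\), so \(D_g(\theta_0) = 2\theta_0\) and the delta method predicts \(\sqrt{n}(\hat{\theta}_n^2 - \theta_0^2) \xrightarrow{d} N(0, (2\theta_0)^2 \sigma^2)\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1000, 4000
theta0, sigma = 2.0, 1.0              # hypothetical mean and sd of the data

# theta_hat is the sample mean; g(theta) = theta^2, so Dg(theta0) = 2 * theta0
X = rng.normal(theta0, sigma, size=(reps, n))
theta_hat = X.mean(axis=1)
stats = np.sqrt(n) * (theta_hat**2 - theta0**2)

# Predicted asymptotic variance: (2 * theta0)^2 * sigma^2 = 16
print(stats.mean(), stats.var())
```

The simulated variance is close to the delta-method prediction of 16, while naively ignoring the Jacobian would predict 1.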

2.3 Test Statistics
Suppose that \(\{\hat{b}_n : n = 1, 2, \dots\}\) is a sequence of estimators which satisfies:

\[
\sqrt{n}(\hat{b}_n - b) \xrightarrow[n \to \infty]{d} N(0, V),
\]

where \(V > 0\) is the asymptotic variance-covariance matrix of \(\sqrt{n}(\hat{b}_n - b)\), and \(R \in \mathbb{R}^{q \times K}\) with \(q \leq K\) and \(\operatorname{rank}(R) = q\). Then, the following lemma is derived.

Lemma 2.4. In the above settings, \(\sqrt{n}\, R(\hat{b}_n - b) \xrightarrow[n \to \infty]{d} N(0, R V R')\) and:

\[
\left[\sqrt{n}\, R(\hat{b}_n - b)\right]' (R V R')^{-1} \left[\sqrt{n}\, R(\hat{b}_n - b)\right] \xrightarrow[n \to \infty]{d} \chi^2(q).
\]

In addition, if \(\hat{V}\) (the estimator of \(V\)) is consistent, then:

\[
\left[\sqrt{n}\, R(\hat{b}_n - b)\right]' (R \hat{V} R')^{-1} \left[\sqrt{n}\, R(\hat{b}_n - b)\right] \xrightarrow[n \to \infty]{d} \chi^2(q).
\]

Proof.² If \(\sqrt{n}(\hat{b}_n - b) \xrightarrow{d} N(0, V)\) as \(n \to \infty\), then \(\sqrt{n}\, R(\hat{b}_n - b) \xrightarrow[n \to \infty]{d} N(0, R V R')\) is derived. Write the statistic \(W\) as follows:

\[
W = \left[\sqrt{n}\, R(\hat{b}_n - b)\right]' Q_n^{-1} \left[\sqrt{n}\, R(\hat{b}_n - b)\right],
\]

where \(Q_n = R \hat{V} R'\) (\(\hat{V}\) is a consistent estimator of \(V\)), and let \(c_n = \sqrt{n}\, R(\hat{b}_n - b)\). Then, \(c_n \xrightarrow[n \to \infty]{d} c\) where \(c \sim N(0, R V R')\), and \(Q_n \xrightarrow{p} Q\) where \(Q = R V R'\). Because \(R\) has full row rank and \(V\) is positive definite, \(Q\) is invertible. Therefore, \(W \xrightarrow{d} c' Q^{-1} c \sim \chi^2(q)\) by Theorem 2.1. \(\square\)
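The Wald statistic of Lemma 2.4 can be sketched for the constrained-OLS example of (6) with \(R = (1, 1)\) and \(q = 1\). The Python code below uses hypothetical data whose true coefficients satisfy the constraint, and works with the finite-sample version \(\widehat{\operatorname{Var}}(\hat{b}) = s^2 (X'X)^{-1}\), which absorbs the \(\sqrt{n}\) factors.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Hypothetical design with two regressors; true coefficients satisfy b1 + b2 = 1
b_true = np.array([0.4, 0.6])
X = rng.normal(size=(n, 2))
y = X @ b_true + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ (X.T @ y)            # OLS estimator
resid = y - X @ b_hat
s2 = resid @ resid / (n - 2)
V_hat = s2 * XtX_inv                   # estimated Var(b_hat)

R = np.array([[1.0, 1.0]])
q = np.array([1.0])

# Wald statistic: (R b_hat - q)' (R V_hat R')^{-1} (R b_hat - q), chi^2(1) under H0
diff = R @ b_hat - q
W = float(diff @ np.linalg.inv(R @ V_hat @ R.T) @ diff)
print(W)
```

Because the constraint holds in the simulated data, \(W\) is typically a small draw from (approximately) a \(\chi^2(1)\) distribution; comparing \(W\) with the \(\chi^2(1)\) critical value 3.84 gives a 5% test of \(H_0 : Rb = q\).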

2.4 Review of t Statistics


In the Econometrics class, the t distribution is applied to statistical tests and confidence intervals for the OLSE. Therefore, we briefly review this statistic.

2.4.1 General Definition


Suppose a sequence of random variables \(X_i\) \((i = 1, \dots, n)\), where each \(X_i\) follows an i.i.d. normal distribution \(N(\mu, \sigma^2)\). By applying the CLT, we have:

\[
\frac{\bar{X} - \mu}{\sqrt{\sigma^2 / n}} \xrightarrow[n \to \infty]{d} Z, \tag{11}
\]

where \(Z\) follows a standard normal distribution. If we do not know the true variance \(\sigma^2\), we use the sample variance \(s^2\) as its estimator:

\[
t = \frac{\bar{X} - \mu}{\sqrt{s^2 / n}}, \tag{12}
\]
² This proof is explained in Chapter 2 of Hayashi, Fumio (2000), Econometrics, Princeton University Press.

where \(s^2 = \frac{1}{n-1}\left[(X_1 - \bar{X})^2 + \cdots + (X_n - \bar{X})^2\right]\). Now, we can rewrite (12) as follows:

\[
t = \frac{\bar{X} - \mu}{\sqrt{\sigma^2 / n}} \bigg/ \sqrt{\frac{(n-1)\, s^2}{\sigma^2} \Big/ (n-1)}. \tag{13}
\]

The numerator follows the standard normal distribution, and part of the denominator, \(\frac{(n-1)\, s^2}{\sigma^2}\), follows the \(\chi^2\) distribution with \(n - 1\) degrees of freedom.

2.4.2 t Statistics for OLSE


In the case of the OLSE in (2), as we learned in Section 1, the following relationship is established:

\[
\frac{\hat{\beta} - \beta}{\sigma / \sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}} \xrightarrow[n \to \infty]{d} N(0, 1).
\]

In this case, replacing \(\sigma\) by its estimator \(\hat{\sigma}\), we obtain the t statistic:

\[
t = \frac{\hat{\beta} - \beta}{\hat{\sigma} / \sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}} \sim t(n - 2). \tag{14}
\]

We will explain why the above equation holds. At first, we must consider how to derive \(\hat{\sigma}^2\).
 
Theorem 2.5. Under the assumptions of the classical OLS model, the unbiased estimator of \(\sigma^2\) is given as follows:

\[
\hat{\sigma}^2 = \frac{1}{n - 2} \sum_{i=1}^{n} \hat{u}_i^2. \tag{15}
\]
Proof. In (15), we can write \(\sum_{i=1}^{n} \hat{u}_i^2 = \hat{u}'\hat{u}\), and the residual is rewritten as follows:

\[
\hat{u} = M_x u = \left[I_n - x(x'x)^{-1} x'\right] u, \tag{16}
\]

where \(x\) is the \(n \times 2\) regressor matrix (a constant and the single regressor). Therefore, since \(\hat{u}'\hat{u}\) is a scalar, it equals its own trace, and by the cyclic property of the trace:

\[
\hat{u}'\hat{u} = u' M_x u = \operatorname{tr}(u' M_x u) = \operatorname{tr}(M_x u u'). \tag{17}
\]

Then, the expectation of (17) is given as follows:

\[
\begin{aligned}
E(\hat{u}'\hat{u}) &= E[\operatorname{tr}(M_x u u')] \\
&= \operatorname{tr}[E(M_x u u')] \\
&= \operatorname{tr}[E(E(M_x u u' \mid x))] \\
&= \operatorname{tr}[E(M_x E(u u' \mid x))] \\
&= \sigma^2 \operatorname{tr}[E(M_x)]. \tag{18}
\end{aligned}
\]
In the above equation, \(\operatorname{tr}[E(M_x)]\) is represented as follows, again using the cyclic property of the trace:

\[
\begin{aligned}
\operatorname{tr}[E(M_x)] &= \operatorname{tr}\left[I_n - E\left(x(x'x)^{-1} x'\right)\right] \\
&= n - E\left[\operatorname{tr}\left(x(x'x)^{-1} x'\right)\right] \\
&= n - E\left[\operatorname{tr}\left((x'x)^{-1} x'x\right)\right] \\
&= n - \operatorname{tr}[I_2] \\
&= n - 2. \tag{19}
\end{aligned}
\]

From these equations, the above theorem is proven. Note that the same steps give \(n - K\) in the case of \(K\) regressors.
Finally, we can confirm that the t statistic in (14) follows a t distribution whose degrees of freedom equal \(n - 2\):

\[
t = \frac{\hat{\beta} - \beta}{\sqrt{\sigma^2 / \sum_{i=1}^{n} (x_i - \bar{x})^2}} \bigg/ \sqrt{\frac{(n-2)\, \hat{\sigma}^2}{\sigma^2} \Big/ (n-2)}. \tag{20}
\]

The numerator follows the standard normal distribution, and part of the denominator, \(\frac{(n-2)\, \hat{\sigma}^2}{\sigma^2}\), follows the \(\chi^2\) distribution whose degrees of freedom equal \(n - 2\).
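The exact \(t(n-2)\) distribution of (14) can be checked by simulation. The Python sketch below (hypothetical parameter values) computes the t statistic repeatedly with a small sample, \(n = 10\), and verifies that its variance matches the \(t(8)\) value \(\mathrm{df}/(\mathrm{df} - 2) = 4/3\) rather than the standard normal value 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 10, 40000
alpha, beta, sigma = 0.0, 1.0, 2.0    # hypothetical true parameters

x = rng.normal(size=n)                # regressors held fixed across replications
xc = x - x.mean()
Sxx = xc @ xc

t_stats = np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, sigma, n)
    y = alpha + beta * x + u
    b_hat = (xc @ y) / Sxx
    a_hat = y.mean() - b_hat * x.mean()
    resid = y - a_hat - b_hat * x
    s2 = resid @ resid / (n - 2)      # unbiased sigma^2 estimator, eq. (15)
    t_stats[r] = (b_hat - beta) / np.sqrt(s2 / Sxx)

# t(n - 2) = t(8) has variance 8 / (8 - 2) = 4/3, heavier-tailed than N(0, 1)
print(t_stats.mean(), t_stats.var())
```

The simulated variance is noticeably above 1, which is exactly why small-sample inference uses t critical values instead of normal ones.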

2.5 R Exercise
In this subsection, we will explain how to use R. Today, we use data on the speed of cars and the distances taken to stop, recorded in the 1920s (the built-in cars data set). Consider the following regression model:

\[
(\text{distance})_i = a + b\, (\text{speed})_i + u_i.
\]

The result of this estimation is easily output by the stargazer package, which produces an estimation table in LaTeX.

Table 1:

                          Dependent variable: dist
  speed                   3.932*** (0.416)    t = 9.464    p = 0.000
  Constant                -17.579** (6.758)   t = -2.601   p = 0.013
  Observations            50
  R2                      0.651
  Adjusted R2             0.644
  Residual Std. Error     15.380 (df = 48)
  F Statistic             89.567*** (df = 1; 48)  (p = 0.000)

  Note: *p<0.1; **p<0.05; ***p<0.01

We can draw a figure with the regression line and save it as a PDF file; the R code is given in the Appendix. The lm function is a base function of R.

3 Appendix
3.1 R code
library(stargazer)

# Let us check the single regression model by using the "cars" data set.
data(cars)
# fix(cars)  # optionally opens an interactive editor to inspect the data

speed <- cars[, 1]
dist <- cars[, 2]

cars.lm <- lm(dist ~ speed)

stargazer(cars.lm, style = "all", type = "latex")
plot(cars)
abline(cars.lm, lwd = 1, col = "blue")

# Commands to make a pdf file.
pdf("carsdata.pdf")
plot(cars)
abline(cars.lm, lwd = 1, col = "blue")  # abline draws on the current plot; par(new = TRUE) is not needed
dev.off()
