Lecture Note 6: Asymptotic Properties
P(|Wn − θ| > ε) → 0 as n → ∞.
This says that the probability that the absolute difference between Wn and θ is larger
than ε goes to zero as n grows. The probability can therefore be non-zero while n is
small. For instance, suppose we want to find the average income of American people and
draw small random samples. If a small sample happens to include Bill Gates, the sample
mean income is far above the population average. Thus, when sample sizes are small, the
probability that the difference between the sample and population averages exceeds ε,
for any positive ε, can be non-zero. However, the difference between the sample and
population averages shrinks as the sample size grows (as long as the sampling is
properly done). As a result, as the sample size goes to infinity, the probability that
the difference between the two averages exceeds ε (no matter how small ε is) becomes
zero.
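This convergence can be checked numerically. The sketch below (a hypothetical skewed "income" distribution with population mean 1; the distribution, tolerance, and seed are illustrative assumptions, not from the note) estimates P(|Wn − θ| > ε) by Monte Carlo for growing n, where Wn is the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0   # population mean: exp(mu + sigma^2/2) = 1 when mu = -0.5, sigma = 1
eps = 0.1     # the fixed tolerance epsilon
probs = []
for n in [10, 100, 1000, 10000]:
    # 500 replications of a size-n sample from a skewed (lognormal) distribution
    samples = rng.lognormal(mean=-0.5, sigma=1.0, size=(500, n))
    w_n = samples.mean(axis=1)                      # 500 draws of the sample mean W_n
    probs.append(np.mean(np.abs(w_n - theta) > eps))  # estimate P(|W_n - theta| > eps)
print(probs)
```

The estimated probability should be substantial for small n and fall toward zero as n grows, which is exactly what consistency promises.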
plim (Wn) = θ.
Under the finite-sample properties, we say that Wn is unbiased if E(Wn) = θ. Under the
asymptotic properties, we say that Wn is consistent because Wn converges to θ as n gets
larger.
β̂ = (X′X)⁻¹X′Y

β̂ = β + (X′X)⁻¹X′u
Here, we assume that

plim (1/n)X′X = Q.
This assumption is not a difficult one to make, since the law of large numbers suggests
that each component of (1/n)X′X converges to its mean. We also assume that Q⁻¹ exists.
From E2, we have

plim (1/n)X′u = 0.
Thus,
p lim βˆ = β
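A quick numerical sketch of this consistency result (the design matrix, true coefficients, and seed below are illustrative assumptions, not from the note): with E(u|X) = 0, the OLS estimate should get closer to β as n grows.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.array([2.0, -1.0])   # true coefficients (intercept, slope) -- illustrative
errors = []
for n in [100, 100000]:
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
    u = rng.normal(size=n)                                 # E(u|X) = 0 holds here
    y = X @ beta + u
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)           # (X'X)^{-1} X'y
    errors.append(np.abs(beta_hat - beta).max())           # worst coefficient error
print(errors)
```

The maximum coefficient error should shrink markedly between n = 100 and n = 100000.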
Next, we focus on asymptotic inference for the OLS estimator. To obtain the
asymptotic distribution of the OLS estimator, we first derive the limiting distribution
by multiplying the centered estimator by √n:
β̂ = β + [(1/n)X′X]⁻¹ (1/n)X′u

√n(β̂ − β) = [(1/n)X′X]⁻¹ (1/√n)X′u
From E4, the probability limit of uu′ goes to σ²I, and we assumed plim (1/n)X′X = Q.
Thus,
Var[√n(β̂ − β)] = Q⁻¹ [plim (σ²/n)X′X] Q⁻¹
= σ² Q⁻¹ Q Q⁻¹
= σ² Q⁻¹
√n(β̂ − β) ~d N[0, σ²Q⁻¹].
From this, we can obtain the asymptotic distribution of the OLS estimator by dividing by
√n and adding back β:
β̂ ~a N[β, σ²n⁻¹Q⁻¹].
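The asymptotic normality result can likewise be illustrated by simulation. In the sketch below (a bivariate design with var(x) = 1, so the asymptotic variance of the slope is just σ²; all numbers are illustrative assumptions), the standardized statistic √n(β̂₁ − β₁)/σ should be approximately standard normal:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, sigma = 1.0, 0.5, 2.0   # illustrative true parameters
n, reps = 500, 4000
z = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)            # var(x) = 1, so asy. var of slope = sigma^2
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
    b1 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # OLS slope estimate
    z[r] = np.sqrt(n) * (b1 - beta1) / sigma      # standardized statistic
print(z.mean(), z.std())
```

Across replications, z should have mean near 0 and standard deviation near 1, matching N[0, σ²Q⁻¹] after standardization.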
A bivariate model: yᵢ = β₀ + β₁xᵢ + uᵢ, with the OLS slope estimator

β̂₁ = [Σᵢ (xᵢ − x̄)yᵢ] / [Σᵢ (xᵢ − x̄)²],

where the sums run over i = 1, …, n.
Under the assumption of zero conditional mean (SLR 3: E(u|x) = 0), we can separate the
expectation of x and u:
E(β̂₁) = β₁ + [Σᵢ (xᵢ − x̄)E(uᵢ)] / [Σᵢ (xᵢ − x̄)²].
Thus we need SLR 3 to show that the OLS estimator is unbiased.
Now, suppose SLR 3 is violated and we cannot show the unbiasedness of the OLS estimator.
We then consider the consistency of the OLS estimator.
plim β̂₁ = plim β₁ + plim {[Σᵢ (xᵢ − x̄)uᵢ] / [Σᵢ (xᵢ − x̄)²]}

plim β̂₁ = β₁ + plim [(1/n)Σᵢ (xᵢ − x̄)uᵢ] / plim [(1/n)Σᵢ (xᵢ − x̄)²]

plim β̂₁ = β₁ + cov(x, u) / var(x)

plim β̂₁ = β₁ if cov(x, u) = 0.
Thus, as long as the covariance between x and u is zero, the OLS estimator of the
bivariate model is consistent.
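Conversely, when cov(x, u) ≠ 0, the derivation above says the OLS slope converges to β₁ + cov(x, u)/var(x) rather than to β₁. A minimal sketch (simulated data with cov(x, u) = 0.5 and var(x) = 1 built in by construction; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta1 = 200000, 1.0
x = rng.normal(size=n)                 # var(x) = 1
u = 0.5 * x + rng.normal(size=n)       # cov(x, u) = 0.5 by construction
y = beta1 * x + u
b1 = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # OLS slope estimate
print(b1)  # should be close to beta1 + cov(x,u)/var(x) = 1.5, not 1.0
```

The estimate lands near 1.5 = β₁ + cov(x, u)/var(x), not the true β₁ = 1, showing the inconsistency when the zero-covariance condition fails.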
End of Example 6-1