1 C-D problem

1.1 Unknown ε
$$\max_{k_i} \; E(\pi) = E(P y_i - r k_i) = E(P A k_i^{\alpha} e^{\varepsilon_i} - r k_i)$$
FOC:
$$\alpha P A k_i^{\alpha-1} E(e^{\varepsilon_i}) - r = 0$$
$$k_i = \left[\frac{r}{\alpha P A E(e^{\varepsilon_i})}\right]^{\frac{1}{\alpha-1}}$$
1.2 Firms know ε

$$\max_{k_i} \; \pi = P y_i - r k_i = P A k_i^{\alpha} e^{\varepsilon_i} - r k_i$$
FOC:
$$\alpha P A k_i^{\alpha-1} e^{\varepsilon_i} - r = 0$$
$$k_i = \left[\frac{r}{\alpha P A e^{\varepsilon_i}}\right]^{\frac{1}{\alpha-1}}$$
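Below is a minimal numerical sketch of the two capital choices; the parameter values, the $N(0, 0.25)$ distribution assumed for $\varepsilon_i$, and all variable names are illustrative assumptions, not from the text. It checks the closed-form FOC solution for the known-$\varepsilon$ case against a direct numerical profit maximization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative parameter values (assumptions, not from the text)
A, P, r, alpha = 1.0, 1.0, 0.05, 0.3
eps_i = 0.2                                  # one realization of epsilon_i
E_exp_eps = np.exp(0.25 / 2)                 # E(e^eps) if eps ~ N(0, 0.25) (lognormal mean)

# Closed-form capital choices from the two FOCs
k_unknown = (r / (alpha * P * A * E_exp_eps)) ** (1 / (alpha - 1))    # section 1.1
k_known = (r / (alpha * P * A * np.exp(eps_i))) ** (1 / (alpha - 1))  # section 1.2

# Numerical check for the known-epsilon case: maximize realized profit directly
neg_profit = lambda k: -(P * A * k ** alpha * np.exp(eps_i) - r * k)
k_numeric = minimize_scalar(neg_profit, bounds=(1e-6, 1e4), method="bounded").x

print(k_known, k_numeric)   # should agree up to optimizer tolerance
print(k_unknown)            # uses E(e^eps) in place of the realized e^eps
```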
Estimation of β
$$b = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2} \tag{9}$$
$$= \frac{\sum_i (x_i y_i - x_i\bar{y} - \bar{x}y_i + \bar{x}\bar{y})}{\sum_i (x_i^2 - 2x_i\bar{x} + \bar{x}^2)} \tag{10}$$
$$= \frac{\sum_i x_i y_i - n\bar{x}\bar{y} - n\bar{x}\bar{y} + n\bar{x}\bar{y}}{\sum_i x_i^2 - 2n\bar{x}^2 + n\bar{x}^2} \tag{11}$$
$$= \frac{\sum_i x_i y_i - n\bar{x}\bar{y}}{\sum_i x_i^2 - n\bar{x}^2} \tag{12}$$
$$= \frac{n\sum_i x_i y_i - \sum_i x_i \sum_i y_i}{n\sum_i x_i^2 - \left(\sum_i x_i\right)^2} \tag{13}$$
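As a quick numerical check (a minimal sketch with simulated, illustrative data), the deviation-from-means form (9) and the computational form (13) return the same slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)        # illustrative data with slope 2

# Deviation-from-means form, eq. (9)
b_dev = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Computational form, eq. (13)
b_comp = (n * np.sum(x * y) - x.sum() * y.sum()) / (n * np.sum(x ** 2) - x.sum() ** 2)

print(b_dev, b_comp)                    # identical up to floating-point error
```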
MLE
Assuming $\varepsilon_i \sim N(0, \sigma^2)$, and
$$LF = \prod_i \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(y_i - \beta x_i)^2}{2\sigma^2}\right) \tag{14}$$
Taking the log of the last equation won't change the value of $\beta$ at which the likelihood function achieves its maximum, because the log transformation is increasing:
$$\max_{\beta} \; LLF = \sum_i \left(-\log(\sigma\sqrt{2\pi}) - \frac{(y_i - \beta x_i)^2}{2\sigma^2}\right) \tag{15}$$
Since $-\log(\sigma\sqrt{2\pi})$ includes no information on $\beta$, the previous maximization problem is identical to the following:
$$\max_{\beta} \; \sum_i -(y_i - \beta x_i)^2 \tag{16}$$
which is the same as
$$\min_{\beta} \; \sum_i (y_i - \beta x_i)^2 \tag{17}$$
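The equivalence can also be checked numerically. The sketch below, with simulated data and $\sigma$ treated as known (both illustrative assumptions), maximizes the log-likelihood (15) by numerical optimization and compares it with the closed-form minimizer of (17) for the no-intercept model, $\beta = \sum_i x_i y_i / \sum_i x_i^2$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(scale=0.5, size=n)    # illustrative data, true beta = 1.5
sigma = 0.5                                    # sigma treated as known for this check

# Negative of the log-likelihood in eq. (15)
def neg_llf(beta):
    resid = y - beta * x
    return np.sum(np.log(sigma * np.sqrt(2 * np.pi)) + resid ** 2 / (2 * sigma ** 2))

beta_mle = minimize_scalar(neg_llf).x

# Minimizer of eq. (17) for y_i = beta * x_i: beta = sum(x*y) / sum(x^2)
beta_ls = np.sum(x * y) / np.sum(x ** 2)

print(beta_mle, beta_ls)                       # agree up to optimizer tolerance
```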
To show this mathematically, you need to know the law of iterated expectation, which is
$$E(E(x|y)) = E(x) \tag{18}$$
Proof:
$$E(E(x|y)) = \int \left(\int x f(x|y)\,dx\right) f_Y(y)\,dy \tag{19}$$
$$= \int\!\!\int x \frac{f(x, y)}{f_Y(y)} f_Y(y)\,dx\,dy \tag{20}$$
$$= \int\!\!\int x f(x, y)\,dx\,dy \tag{21}$$
$$= E(x) \tag{22}$$
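A small Monte Carlo illustration of (18), using an assumed joint distribution in which $E(x|y)$ is known in closed form (the particular joint, $x = y + u$ with $u$ independent of $y$, is an illustrative choice, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Assumed joint: x = y + u with u independent of y, so E(x|y) = y
y = rng.normal(size=n)
x = y + rng.normal(size=n)

print(x.mean())     # E(x)
print(y.mean())     # E(E(x|y)) = E(y) under this joint; both are close to 0
```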
Now recall the logic used to show this: I: $E(\varepsilon|x) = 0$ implies zero covariance; II: zero covariance is identical to $E(x\varepsilon) = 0$; III: zero covariance cannot imply $E(\varepsilon|x) = 0$.

Proof of I: First, we know that $E(\varepsilon|x) = 0$ implies $E(\varepsilon|x) = E(\varepsilon) = 0$:
$$E(\varepsilon|x) = \int \varepsilon f(\varepsilon|x)\,d\varepsilon \tag{23}$$
$$= \int \varepsilon \frac{f(x, \varepsilon)}{f(x)}\,d\varepsilon \tag{24}$$
$$= \frac{1}{f(x)} \int \varepsilon f(x, \varepsilon)\,d\varepsilon \tag{25}$$
$$= 0 \tag{26}$$
Since $\frac{1}{f(x)} \neq 0$, we have $\int \varepsilon f(x, \varepsilon)\,d\varepsilon = 0$; integrating this over $x$ gives $E(\varepsilon) = \int\!\!\int \varepsilon f(x, \varepsilon)\,d\varepsilon\,dx = 0$, so $E(\varepsilon|x) = E(\varepsilon) = 0$. Next,
$$E(\varepsilon|x) = E\left(\frac{x\varepsilon}{x}\,\Big|\,x\right) \tag{27}$$
$$= \frac{1}{x} E(x\varepsilon|x) \tag{28}$$
which implies
$$x E(\varepsilon|x) = E(x\varepsilon|x) \tag{29}$$
Taking expectation on both sides,
$$E(x\varepsilon) = E(E(x\varepsilon|x)) \tag{30}$$
$$= E(x E(\varepsilon|x)) \tag{31}$$
$$= E(x E(\varepsilon)) \tag{32}$$
$$= E(x) E(\varepsilon) \tag{33}$$
By the definition of $cov(x, \varepsilon) = E(x\varepsilon) - E(x)E(\varepsilon)$, from the last equation we know that, assuming $E(\varepsilon|x) = 0$, we have zero covariance, so we have shown I.
Since we know that we can assume $E(\varepsilon) = 0$, and $cov(x, \varepsilon) = E(x\varepsilon) - E(x)E(\varepsilon) = 0$, then $E(x\varepsilon) = 0$. So we have shown II. The last thing is III, which we show by finding a counterexample. Assume $E(\varepsilon|x) = x^2$ and $x \sim N(0, 1)$. Then $E(x) = 0$ and $E(\varepsilon) = E(E(\varepsilon|x)) = E(x^2) = 1$, since $x^2 \sim \chi^2(1)$. Also $E(x\varepsilon) = E(xE(\varepsilon|x)) = E(x^3) = 0$, since the skewness of the normal distribution is 0. So $cov(x, \varepsilon) = E(x\varepsilon) - E(x)E(\varepsilon) = 0 - 0 = 0$. However, $E(\varepsilon|x) = x^2 \neq 0$. This shows all three parts.
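The counterexample in III can also be simulated. In the sketch below, $\varepsilon$ is constructed so that $E(\varepsilon|x) = x^2$ (the added mean-zero noise is an illustrative choice); the sample covariance is close to zero even though $E(\varepsilon|x)$ clearly varies with $x$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x = rng.normal(size=n)
eps = x ** 2 + rng.normal(size=n)   # E(eps|x) = x^2; the noise term is an illustrative choice

cov = np.mean(x * eps) - x.mean() * eps.mean()
print(cov)                          # close to 0, as in the counterexample

# E(eps|x) is clearly not 0: its sample analogue rises with |x|
print(eps[np.abs(x) < 0.5].mean(), eps[np.abs(x) > 1.5].mean())
```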