Ch2 Slides Edited
‘Introductory Econometrics for Finance’ © Chris Brooks 2008
Regression
• Denote the dependent variable by y and the independent variable(s) by x1, x2, ... , xk
where there are k independent variables.
• Note that there can be many x variables but we will limit ourselves to the case
where there is only one x variable to start with. In our set-up, there is only one y
variable.
• For simplicity, say k=1. This is the situation where y depends on only one x
variable.
• Suppose that we have the following data on the excess returns on a fund
manager’s portfolio (“fund XXX”) together with the excess returns on a
market index:
Year, t   Excess return = rXXX,t − rft   Excess return on market index = rmt − rft
1         17.8                           13.7
2         39.0                           23.2
3         12.8                            6.9
4         24.2                           16.8
5         17.2                           12.3
• We have some intuition that the beta on this fund is positive, and we
therefore want to find whether there appears to be a relationship between x
and y given the data that we have. The first stage would be to form a scatter
plot of the two variables.
[Scatter plot: excess return on fund XXX (vertical axis, 0 to 45) against excess return on market portfolio (horizontal axis, 0 to 25)]
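A minimal Python sketch of this scatter plot, using matplotlib (variable names are illustrative):

```python
import matplotlib.pyplot as plt

# Excess returns from the table above
x = [13.7, 23.2, 6.9, 16.8, 12.3]   # market index: r_mt - r_ft
y = [17.8, 39.0, 12.8, 24.2, 17.2]  # fund XXX: r_XXX,t - r_ft

plt.scatter(x, y)
plt.xlabel("Excess return on market portfolio")
plt.ylabel("Excess return on fund XXX")
plt.show()
```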
• The most common method used to fit a line to the data is known as
OLS (ordinary least squares).
• What we actually do is take each vertical distance from a data point to the fitted line, square it (i.e. take the area of each of the squares in the diagram), and minimise the total sum of the squares (hence least squares).
[Diagram: at a given xi, the residual ûi is the vertical distance between the actual observation yi and the fitted value ŷi on the line]
• But what was ût? It was the difference between the actual point and the line, yt − ŷt.
• So minimising $\sum_t (y_t - \hat{y}_t)^2$ is equivalent to minimising $\sum_t \hat{u}_t^2$.
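Minimising this sum has the well-known closed-form solution β̂ = Σt(xt − x̄)(yt − ȳ) / Σt(xt − x̄)², with α̂ = ȳ − β̂x̄. A minimal sketch applying these formulae to the five observations above:

```python
import numpy as np

x = np.array([13.7, 23.2, 6.9, 16.8, 12.3])
y = np.array([17.8, 39.0, 12.8, 24.2, 17.2])

# Closed-form OLS estimates for y_t = alpha + beta*x_t + u_t
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
print(alpha_hat, beta_hat)  # approx -1.74 and 1.64
```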
$\hat{y}_t = -1.74 + 1.64 x_t$
• Question: If an analyst tells you that she expects the market to yield a return
20% higher than the risk-free rate next year, what would you expect the return
on fund XXX to be?
• Solution: We can say that the expected value of y = “-1.74 + 1.64 * value of
x”, so plug x = 20 into the equation to get the expected value for y:
$\hat{y}_i = -1.74 + 1.64 \times 20 = 31.06$
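The same calculation in code, using the rounded estimates:

```python
alpha_hat, beta_hat = -1.74, 1.64  # estimates from the fitted line above
x_new = 20.0                       # expected market excess return (%)
y_pred = alpha_hat + beta_hat * x_new
print(y_pred)  # approx 31.06
```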
• Linear in the parameters means that the parameters are not multiplied
together, divided, squared or cubed etc.
$$Y_t = e^{\alpha} X_t^{\beta} e^{u_t} \quad\Leftrightarrow\quad \ln Y_t = \alpha + \beta \ln X_t + u_t$$
• Then let yt = ln Yt and xt = ln Xt:
$$y_t = \alpha + \beta x_t + u_t$$
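A sketch with assumed, simulated data, showing that OLS on the logged series recovers the parameters of the multiplicative model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, 100)
u = rng.normal(0.0, 0.1, 100)
Y = np.exp(0.5) * X ** 2.0 * np.exp(u)  # Y_t = e^alpha * X_t^beta * e^u_t

# OLS on the logged variables y_t = ln Y_t, x_t = ln X_t
y, x = np.log(Y), np.log(X)
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
print(alpha_hat, beta_hat)  # close to the true alpha = 0.5 and beta = 2.0
```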
• Additional Assumption
5. ut is normally distributed
• Consistent
The least squares estimators $\hat{\alpha}$ and $\hat{\beta}$ are consistent. That is, the estimates will converge to their true values as the sample size increases to infinity. The assumptions E(xtut) = 0 and Var(ut) = σ² < ∞ are needed to prove this. Consistency implies that, for any δ > 0,
$$\lim_{T \to \infty} \Pr\left[\left|\hat{\beta} - \beta\right| > \delta\right] = 0$$
• Unbiased
The least squares estimates of α and β are unbiased. That is, E(α̂) = α and E(β̂) = β. Thus on average the estimated values will be equal to the true values. To prove this also requires the assumption that E(ut) = 0. Unbiasedness is a stronger condition than consistency.
• Efficiency
An estimator β̂ of a parameter β is said to be efficient if it is unbiased and no other unbiased estimator has a smaller variance. If the estimator is efficient, we are minimising the probability that it is a long way off from the true value of β.
$$SE(\hat{\alpha}) = s\sqrt{\frac{\sum x_t^2}{T\sum(x_t - \bar{x})^2}} = s\sqrt{\frac{\sum x_t^2}{T\left(\sum x_t^2 - T\bar{x}^2\right)}}$$

$$SE(\hat{\beta}) = s\sqrt{\frac{1}{\sum(x_t - \bar{x})^2}} = s\sqrt{\frac{1}{\sum x_t^2 - T\bar{x}^2}}$$

where s is the estimated standard deviation of the residuals,

$$s = \sqrt{\frac{\sum \hat{u}_t^2}{T - 2}}$$

where $\sum \hat{u}_t^2$ is the residual sum of squares and T is the sample size.
2. The sum of the squares of x about their mean appears in both formulae.
The larger the sum of squares, the smaller the coefficient variances.
[Diagrams: scatter plots of y against x comparing widely dispersed x values with x values clustered close to their mean]
3. The larger the sample size, T, the smaller will be the coefficient variances. T appears explicitly in SE($\hat{\alpha}$) and implicitly in SE($\hat{\beta}$), through $\sum(x_t - \bar{x})^2$.
• SE(regression), $s = \sqrt{\frac{\sum \hat{u}_t^2}{T-2}} = \sqrt{\frac{130.6}{20}} = 2.55$

$$SE(\hat{\alpha}) = 2.55 \times \sqrt{\frac{3919654}{22 \times \left(3919654 - 22 \times 416.5^2\right)}} = 3.35$$

$$SE(\hat{\beta}) = 2.55 \times \sqrt{\frac{1}{3919654 - 22 \times 416.5^2}} = 0.0079$$
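A sketch verifying this arithmetic, taking the sums Σû²t = 130.6, Σx²t = 3919654 and x̄ = 416.5 as given on the slide:

```python
import numpy as np

rss, T = 130.6, 22               # residual sum of squares, sample size
sum_x2, x_bar = 3919654, 416.5   # sum of x_t^2 and sample mean of x

s = np.sqrt(rss / (T - 2))
se_alpha = s * np.sqrt(sum_x2 / (T * (sum_x2 - T * x_bar ** 2)))
se_beta = s * np.sqrt(1 / (sum_x2 - T * x_bar ** 2))
print(s, se_alpha, se_beta)  # approx 2.55, 3.35 and 0.0079
```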
• We now write the results as
$$\hat{y}_t = \underset{(3.35)}{59.12} + \underset{(0.0079)}{0.35}\, x_t$$
• We can use the information in the sample to make inferences about the
population.
• We will always have two hypotheses that go together, the null hypothesis
(denoted H0) and the alternative hypothesis (denoted H1).
• The null hypothesis is the statement or the statistical hypothesis that is actually
being tested. The alternative hypothesis represents the remaining outcomes of
interest.
• For example, suppose that, given the regression results above, we are interested in the hypothesis that the true value of β is in fact 0.5. We would use the notation
H0: β = 0.5
H1: β ≠ 0.5
This would be known as a two-sided test.
• There are two ways to conduct a hypothesis test: via the test of
significance approach or via the confidence interval approach.
• Since the least squares estimators are linear combinations of the random variables, i.e.
$$\hat{\beta} = \sum w_t y_t$$
• the weighted sum of normal random variables is also normally distributed, so
$$\hat{\alpha} \sim N(\alpha, \mathrm{Var}(\hat{\alpha})) \quad \text{and} \quad \hat{\beta} \sim N(\beta, \mathrm{Var}(\hat{\beta}))$$
• What if the errors are not normally distributed? Will the parameter estimates still
be normally distributed?
• Yes, if the other assumptions of the CLRM hold, and the sample size is
sufficiently large.
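A small Monte Carlo sketch (the data-generating process here is assumed purely for illustration): even with non-normal disturbances, the sampling distribution of β̂ looks approximately normal once T is reasonably large:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_sims, beta_true = 200, 2000, 0.5
betas = np.empty(n_sims)
for i in range(n_sims):
    x = rng.normal(0.0, 1.0, T)
    u = rng.uniform(-1.0, 1.0, T)  # non-normal (uniform) disturbances
    y = 1.0 + beta_true * x + u
    betas[i] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# The simulated estimates are centred on the true beta with a bell-shaped spread
print(betas.mean(), betas.std())
```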
$$\frac{\hat{\alpha} - \alpha}{\sqrt{\mathrm{var}(\hat{\alpha})}} \sim N(0,1) \quad \text{and} \quad \frac{\hat{\beta} - \beta}{\sqrt{\mathrm{var}(\hat{\beta})}} \sim N(0,1)$$

• But the variances of α̂ and β̂ are unknown and must be estimated, so in practice we use the standard errors, giving

$$\frac{\hat{\alpha} - \alpha}{SE(\hat{\alpha})} \sim t_{T-2} \quad \text{and} \quad \frac{\hat{\beta} - \beta}{SE(\hat{\beta})} \sim t_{T-2}$$
3. We need some tabulated distribution with which to compare the estimated test statistics. Test statistics derived in this way can be shown to follow a t-distribution with T − 2 degrees of freedom.
As the number of degrees of freedom increases, we need to be less cautious in
our approach since we can be more sure that our results are robust.
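A sketch obtaining the critical value from scipy rather than from statistical tables:

```python
from scipy import stats

T, size = 22, 0.05  # sample size and significance level (two-sided test)
t_crit = stats.t.ppf(1 - size / 2, df=T - 2)
print(t_crit)  # approx 2.086 for 20 degrees of freedom at the 5% level
```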
[Diagram: density f(x) of the test statistic, showing the 95% non-rejection region and the 5% rejection region]
7. Finally perform the test. If the test statistic lies in the rejection
region then reject the null hypothesis (H0), else do not reject H0.
• You should all be familiar with the normal distribution and its
characteristic “bell” shape.
• We can scale a normal variate to have zero mean and unit variance by
subtracting its mean and dividing by its standard deviation.
[Diagram: the t-distribution compared with the normal distribution; the t-distribution has fatter tails]
• The reason for using the t-distribution rather than the standard normal is that we had to estimate σ², the variance of the disturbances.
3. Use the t-tables to find the appropriate critical value, which will again have T-2
degrees of freedom.
5. Perform the test: if the hypothesised value of β (β*) lies outside the confidence interval, then reject the null hypothesis that β = β*; otherwise do not reject the null.
• Note that the Test of Significance and Confidence Interval approaches always give the
same answer.
• Under the test of significance approach, we would not reject H0: β = β* if the test statistic lies within the non-rejection region, i.e. if
$$-t_{crit} \le \frac{\hat{\beta} - \beta^*}{SE(\hat{\beta})} \le +t_{crit}$$
• Rearranging, we would not reject if
$$-t_{crit} \cdot SE(\hat{\beta}) \le \hat{\beta} - \beta^* \le +t_{crit} \cdot SE(\hat{\beta})$$
$$\hat{\beta} - t_{crit} \cdot SE(\hat{\beta}) \le \beta^* \le \hat{\beta} + t_{crit} \cdot SE(\hat{\beta})$$
• But this is just the rule under the confidence interval approach.
$$\hat{y}_t = \underset{(14.38)}{20.3} + \underset{(0.2561)}{0.5091}\, x_t, \quad T = 22$$
• Using both the test of significance and confidence interval approaches,
test the hypothesis that =1 against a two-sided alternative.
• The first step is to obtain the critical value. We want tcrit = t20;5%
[Diagram: t20 distribution with 2.5% rejection regions in each tail, beyond −2.086 and +2.086]
• Note that we can test these with the confidence interval approach. For interest (!), test
H0: β = 0 vs. H1: β ≠ 0
H0: β = 2 vs. H1: β ≠ 2
• For example, say we wanted to use a 10% size of test. Using the test of significance approach,
$$\text{test stat} = \frac{\hat{\beta} - \beta^*}{SE(\hat{\beta})} = \frac{0.5091 - 1}{0.2561} = -1.917$$
as above. The only thing that changes is the critical t-value.
[Diagram: t20 distribution with 5% rejection regions in each tail, beyond −1.725 and +1.725]
• t20;10% = 1.725. So now, as the test statistic lies in the rejection region,
we would reject H0.
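A sketch reproducing the test at both the 5% and 10% sizes, using β̂ = 0.5091 and SE(β̂) = 0.2561 from the example, and confirming that the test of significance and confidence interval approaches agree:

```python
from scipy import stats

beta_hat, se_beta, beta_star, T = 0.5091, 0.2561, 1.0, 22
t_stat = (beta_hat - beta_star) / se_beta  # approx -1.917

for size in (0.05, 0.10):
    t_crit = stats.t.ppf(1 - size / 2, df=T - 2)  # 2.086 and 1.725
    ci = (beta_hat - t_crit * se_beta, beta_hat + t_crit * se_beta)
    reject = abs(t_stat) > t_crit  # equivalently: beta_star lies outside ci
    print(size, round(t_crit, 3), ci, reject)  # rejects at 10% but not at 5%
```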
• If we reject the null hypothesis at the 5% level, we say that the result
of the test is statistically significant.
• The probability of a type I error is just α, the significance level or size of the test we chose. To see this, recall what we said significance at the 5% level meant: it is only 5% likely that a result as extreme as, or more extreme than, this could have occurred purely by chance.
• Note that there is no chance for a free lunch here! What happens if we reduce the
size of the test (e.g. from a 5% test to a 1% test)? We reduce the chances of
making a type I error ... but we also reduce the probability that we will reject the
null hypothesis at all, so we increase the probability of a type II error:
Reduce size of test → more strict criterion for rejection → reject null hypothesis less often → less likely to falsely reject, but more likely to incorrectly fail to reject
• So there is always a trade off between type I and type II errors when choosing a
significance level. The only way we can reduce the chances of both is to increase
the sample size.
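A Monte Carlo sketch of this trade-off (the data-generating process is assumed for illustration): when the null is false, the stricter 1% test rejects less often than the 5% test, trading type I errors for type II errors:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, n_sims, beta_true = 30, 5000, 0.3  # H0: beta = 0 is false here
reject = {0.05: 0, 0.01: 0}

for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, T)
    y = beta_true * x + rng.normal(0.0, 1.0, T)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    u = y - a - b * x
    se_b = np.sqrt(np.sum(u ** 2) / (T - 2)) / np.sqrt(np.sum((x - x.mean()) ** 2))
    for size in reject:
        if abs(b / se_b) > stats.t.ppf(1 - size / 2, df=T - 2):
            reject[size] += 1

# The stricter 1% test rejects the (false) null less often than the 5% test,
# i.e. it makes more type II errors in exchange for fewer type I errors.
print({size: n / n_sims for size, n in reject.items()})
```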
A Special Type of Hypothesis Test: The t-ratio
• Suppose that we have the following parameter estimates, standard errors and t-
ratios for an intercept and slope respectively.
              Intercept   Slope
Coefficient      1.10     -4.40
SE               1.35      0.96
t-ratio          0.81     -4.63
• If we reject H0, we say that the result is significant. If the coefficient is not
“significant” (e.g. the intercept coefficient in the last regression above), then
it means that the variable is not helping to explain variations in y. Variables
that are not significant are usually removed from the regression model.
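A sketch computing these t-ratios; note that the slope t-ratio implied by the rounded figures is −4.58, slightly different from the −4.63 reported, presumably because the original calculation used unrounded inputs:

```python
# t-ratio for H0: coefficient = 0 is the estimate divided by its standard error
for name, est, se in [("intercept", 1.10, 1.35), ("slope", -4.40, 0.96)]:
    print(name, round(est / se, 2))  # 0.81 and -4.58
```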
• In practice there are good statistical reasons for always including a constant, even if it is not significant. Look at what happens if no intercept is included:
[Diagram: scatter of yt against xt in which the fitted line, forced through the origin, fits the data poorly]
An Example of the Use of a Simple t-test to Test a
Theory in Finance
• The Data: Annual Returns on the portfolios of 115 mutual funds from
1945-1964.
• The model: $R_{jt} - R_{ft} = \alpha_j + \beta_j (R_{mt} - R_{ft}) + u_{jt}$ for j = 1, …, 115
Comments
• Small samples
• No diagnostic checks of model adequacy
• If the test statistic is large in absolute value, the p-value will be small, and vice versa. The p-value gives the marginal significance level, i.e. the lowest size of test at which the null hypothesis would be rejected, and is therefore a measure of the plausibility of the null hypothesis.
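A sketch computing the two-sided p-value for the earlier test statistic of −1.917 with T − 2 = 20 degrees of freedom:

```python
from scipy import stats

t_stat, T = -1.917, 22
p_value = 2 * stats.t.sf(abs(t_stat), df=T - 2)
print(p_value)  # approx 0.07: reject at the 10% level but not at the 5% level
```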