Univariate Linear Regression
• Denote the dependent variable by y and the independent variable(s) by x1, x2, ... , xk
where there are k independent variables.
• Note that there can be many x variables but we will limit ourselves to the case
where there is only one x variable to start with. In our set-up, there is only one y
variable.
• For simplicity, say k=1. This is the situation where y depends on only one x
variable.
• Suppose that we have the following data on the excess returns on a fund
manager’s portfolio (“fund XXX”) together with the excess returns on a
market index:
Year, t    Excess return on fund XXX = rXXX,t – rft    Excess return on market index = rmt – rft
1          17.8                                        13.7
2          39.0                                        23.2
3          12.8                                        6.9
4          24.2                                        16.8
5          17.2                                        12.3
• We have some intuition that the beta on this fund is positive, and we
therefore want to find whether there appears to be a relationship between
x and y given the data that we have. The first stage would be to form a
scatter plot of the two variables.
[Figure: scatter plot of excess return on fund XXX (vertical axis, 0–45) against excess return on market portfolio (horizontal axis, 0–25)]
‘Introductory Econometrics for Finance’ © Chris Brooks 2013 7
Finding a Line of Best Fit
[Figure: scatter plot with a line of best fit drawn through the data points]
Ordinary Least Squares
• The most common method used to fit a line to the data is known as
OLS (ordinary least squares).
• What we actually do is take each distance and square it (i.e. take the
area of each of the squares in the diagram) and minimise the total sum
of the squares (hence least squares).
[Figure: a single observation (x_i, y_i), its fitted value \hat{y}_i on the line, and the vertical distance \hat{u}_i between them]
• But what was ût ? It was the difference between the actual point and
the line, yt - ŷt .
So minimising \sum_t \hat{u}_t^2 is equivalent to minimising \sum_t (y_t - \hat{y}_t)^2 with respect to \hat{\alpha} and \hat{\beta}.
• But \hat{y}_t = \hat{\alpha} + \hat{\beta} x_t, so let
L = \sum_t (y_t - \hat{y}_t)^2 = \sum_t (y_t - \hat{\alpha} - \hat{\beta} x_t)^2
• Differentiating L with respect to \hat{\alpha} and \hat{\beta} and setting the derivatives to zero:
\frac{\partial L}{\partial \hat{\alpha}} = -2 \sum_t (y_t - \hat{\alpha} - \hat{\beta} x_t) = 0
\frac{\partial L}{\partial \hat{\beta}} = -2 \sum_t x_t (y_t - \hat{\alpha} - \hat{\beta} x_t) = 0
• But \sum y_t = T\bar{y} and \sum x_t = T\bar{x}, so the first condition gives \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}. Substituting this into the second condition:
\sum x_t y_t - \bar{y} \sum x_t + \hat{\beta} \bar{x} \sum x_t - \hat{\beta} \sum x_t^2 = 0
\sum x_t y_t - T\bar{x}\bar{y} + \hat{\beta} T \bar{x}^2 - \hat{\beta} \sum x_t^2 = 0
• Rearranging for \hat{\beta}, overall we have
\hat{\beta} = \frac{\sum x_t y_t - T\bar{x}\bar{y}}{\sum x_t^2 - T\bar{x}^2} \quad \text{and} \quad \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}
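These closed-form estimators can be checked numerically. A minimal sketch in plain Python (my own, not from the slides) applying the formulas to the fund XXX data from the table above:

```python
# OLS slope and intercept via the closed-form formulas:
#   beta_hat  = (sum(x*y) - T*x_bar*y_bar) / (sum(x^2) - T*x_bar^2)
#   alpha_hat = y_bar - beta_hat * x_bar
x = [13.7, 23.2, 6.9, 16.8, 12.3]   # excess return on market index
y = [17.8, 39.0, 12.8, 24.2, 17.2]  # excess return on fund XXX
T = len(x)
x_bar, y_bar = sum(x) / T, sum(y) / T
beta_hat = (sum(a * b for a, b in zip(x, y)) - T * x_bar * y_bar) / \
           (sum(a * a for a in x) - T * x_bar ** 2)
alpha_hat = y_bar - beta_hat * x_bar
print(round(alpha_hat, 2), round(beta_hat, 2))  # -1.74 1.64
```

These are the fitted coefficients used in the worked question below, \hat{y} = -1.74 + 1.64x.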
• Question: If an analyst tells you that she expects the market to yield a return
20% higher than the risk-free rate next year, what would you expect the return
on fund XXX to be?
• Solution: The fitted line is \hat{y} = -1.74 + 1.64x, so plug x = 20 into the equation to get the expected value for y:
\hat{y} = -1.74 + 1.64 \times 20 = 31.06
The Population and the Sample
• Linear in the parameters means that the parameters are not multiplied
together, divided, squared or cubed etc.
Y_t = e^{\alpha} X_t^{\beta} e^{u_t} \;\Leftrightarrow\; \ln Y_t = \alpha + \beta \ln X_t + u_t
• Then let y_t = \ln Y_t and x_t = \ln X_t:
y_t = \alpha + \beta x_t + u_t
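As a quick illustration (my own sketch, not from the slides, with made-up true values \alpha = 0.5 and \beta = 2), data simulated from the multiplicative model can be log-transformed and fitted with the same OLS formulas as before, recovering \beta:

```python
import math
import random

# Simulate Y_t = e^alpha * X_t^beta * e^{u_t}, then estimate by OLS
# on the logged data y_t = ln Y_t, x_t = ln X_t.
random.seed(1)
alpha, beta = 0.5, 2.0   # made-up true values for the illustration
X = [random.uniform(1.0, 10.0) for _ in range(200)]
Y = [math.exp(alpha) * v ** beta * math.exp(random.gauss(0, 0.1)) for v in X]
x = [math.log(v) for v in X]
y = [math.log(v) for v in Y]
T = len(x)
x_bar, y_bar = sum(x) / T, sum(y) / T
beta_hat = (sum(a * b for a, b in zip(x, y)) - T * x_bar * y_bar) / \
           (sum(a * a for a in x) - T * x_bar ** 2)
alpha_hat = y_bar - beta_hat * x_bar
# beta_hat and alpha_hat should land close to 2.0 and 0.5
```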
Linear and Non-linear Models
• Additional Assumption
5. ut is normally distributed
• Consistent
The least squares estimators \hat{\alpha} and \hat{\beta} are consistent. That is, the estimates will converge to their true values as the sample size increases to infinity. We need the assumptions E(x_t u_t) = 0 and Var(u_t) = \sigma^2 < \infty to prove this. Consistency implies that
\lim_{T \to \infty} \Pr\left[ |\hat{\beta} - \beta| > \delta \right] = 0 \quad \forall\, \delta > 0
• Unbiased
The least squares estimates of \alpha and \beta are unbiased. That is, E(\hat{\alpha}) = \alpha and E(\hat{\beta}) = \beta.
Thus on average the estimated values will be equal to the true values. To prove this also requires the assumption that E(u_t) = 0. Unbiasedness is a stronger condition than consistency.
• Efficiency
An estimator \hat{\beta} of parameter \beta is said to be efficient if it is unbiased and no other unbiased estimator has a smaller variance. If the estimator is efficient, we are minimising the probability that it is a long way off from the true value of \beta.
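Unbiasedness can be illustrated by simulation. A sketch (my own, with made-up true values \alpha = 1 and \beta = 0.5) that draws many samples and averages the OLS slope estimates:

```python
import random

# Draw many samples from y_t = 1 + 0.5*x_t + u_t and average the OLS
# slope estimates: the average should be close to the true beta = 0.5.
random.seed(0)
true_alpha, true_beta, T = 1.0, 0.5, 50
slopes = []
for _ in range(2000):
    x = [random.uniform(0.0, 10.0) for _ in range(T)]
    y = [true_alpha + true_beta * xt + random.gauss(0, 1) for xt in x]
    x_bar, y_bar = sum(x) / T, sum(y) / T
    b = (sum(a * c for a, c in zip(x, y)) - T * x_bar * y_bar) / \
        (sum(a * a for a in x) - T * x_bar ** 2)
    slopes.append(b)
print(sum(slopes) / len(slopes))  # close to 0.5
```

Any single estimate may be well away from 0.5; it is the average over repeated samples that centres on the true value.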
• The standard errors of the coefficient estimates are given by
SE(\hat{\alpha}) = s \sqrt{\frac{\sum x_t^2}{T \sum (x_t - \bar{x})^2}} = s \sqrt{\frac{\sum x_t^2}{T\left(\sum x_t^2 - T\bar{x}^2\right)}}
SE(\hat{\beta}) = s \sqrt{\frac{1}{\sum (x_t - \bar{x})^2}} = s \sqrt{\frac{1}{\sum x_t^2 - T\bar{x}^2}}
where s is the estimated standard deviation of the residuals.
• We could estimate \sigma^2 using the average of the squared disturbances u_t^2:
s^2 = \frac{1}{T} \sum u_t^2
• Unfortunately this is not workable since u_t is not observable. We can use the sample counterpart to u_t, which is \hat{u}_t:
s^2 = \frac{1}{T} \sum \hat{u}_t^2
But this estimator is a biased estimator of \sigma^2. An unbiased estimator is
s^2 = \frac{\sum \hat{u}_t^2}{T - 2}
where \sum \hat{u}_t^2 is the residual sum of squares and T is the sample size.
2. The sum of the squares of x about their mean appears in both formulae.
The larger the sum of squares, the smaller the coefficient variances.
[Figure: two scatter plots with fitted lines; when the x observations are more spread out about their mean, the fitted line is pinned down more precisely]
3. The larger the sample size, T, the smaller will be the coefficient
variances. T appears explicitly in SE(\hat{\alpha}) and implicitly in SE(\hat{\beta}).
• SE(regression), s = \sqrt{\frac{\sum \hat{u}_t^2}{T-2}} = \sqrt{\frac{130.6}{20}} = 2.55
SE(\hat{\alpha}) = 2.55 \times \sqrt{\frac{3919654}{22 \times (3919654 - 22 \times 416.5^2)}} = 3.35
SE(\hat{\beta}) = 2.55 \times \sqrt{\frac{1}{3919654 - 22 \times 416.5^2}} = 0.0079
• We now write the results as
\hat{y}_t = 59.12 + 0.35 x_t
           (3.35)   (0.0079)
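The standard-error arithmetic can be replicated directly from the quoted summary statistics (RSS = 130.6, T = 22, \sum x_t^2 = 3919654, \bar{x} = 416.5); a minimal sketch:

```python
import math

# Standard errors from summary statistics, following the formulas above
T = 22
rss = 130.6            # residual sum of squares
sum_x2 = 3919654.0     # sum of x_t squared
x_bar = 416.5
s = math.sqrt(rss / (T - 2))                  # SE of regression, ~2.55
sxx = sum_x2 - T * x_bar ** 2                 # sum of squared deviations of x
se_alpha = s * math.sqrt(sum_x2 / (T * sxx))  # ~3.35
se_beta = s * math.sqrt(1.0 / sxx)            # ~0.0079
print(round(se_alpha, 2), round(se_beta, 4))
```

Small third-decimal differences from the slides' figures arise because the slides round s to 2.55 before multiplying.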
• We can use the information in the sample to make inferences about the
population.
• We will always have two hypotheses that go together, the null hypothesis
(denoted H0) and the alternative hypothesis (denoted H1).
• The null hypothesis is the statement or the statistical hypothesis that is actually
being tested. The alternative hypothesis represents the remaining outcomes of
interest.
• For example, suppose that, given the regression results above, we are interested in
the hypothesis that the true value of \beta is in fact 0.5. We would use the notation
H_0 : \beta = 0.5
H_1 : \beta \neq 0.5
This would be known as a two sided test.
• There are two ways to conduct a hypothesis test: via the test of
significance approach or via the confidence interval approach.
• Since the least squares estimators are linear combinations of the random variables,
i.e.
\hat{\beta} = \sum w_t y_t
• The weighted sum of normal random variables is also normally distributed, so
\hat{\alpha} \sim N(\alpha, \text{Var}(\hat{\alpha})) \quad \text{and} \quad \hat{\beta} \sim N(\beta, \text{Var}(\hat{\beta}))
• What if the errors are not normally distributed? Will the parameter estimates still
be normally distributed?
• Yes, if the other assumptions of the CLRM hold, and the sample size is
sufficiently large.
\frac{\hat{\alpha} - \alpha}{\sqrt{\text{Var}(\hat{\alpha})}} \sim N(0,1) \quad \text{and} \quad \frac{\hat{\beta} - \beta}{\sqrt{\text{Var}(\hat{\beta})}} \sim N(0,1)
But the true variances are unknown and must be estimated, so
\frac{\hat{\alpha} - \alpha}{SE(\hat{\alpha})} \sim t_{T-2} \quad \text{and} \quad \frac{\hat{\beta} - \beta}{SE(\hat{\beta})} \sim t_{T-2}
3. We need some tabulated distribution with which to compare the estimated test
statistics. Test statistics derived in this way can be shown to follow a t-
distribution with T-2 degrees of freedom.
As the number of degrees of freedom increases, we need to be less cautious in
our approach since we can be more sure that our results are robust.
[Figure: t-distribution density f(x) showing the 95% non-rejection region and the 5% rejection region in the tails]
7. Finally perform the test. If the test statistic lies in the rejection
region then reject the null hypothesis (H0), else do not reject H0.
• You should all be familiar with the normal distribution and its
characteristic “bell” shape.
• We can scale a normal variate to have zero mean and unit variance by
subtracting its mean and dividing by its standard deviation.
[Figure: the t-distribution has the same bell shape as the normal distribution but fatter tails]
• The reason for using the t-distribution rather than the standard normal is that
we had to estimate \sigma^2, the variance of the disturbances.
3. Use the t-tables to find the appropriate critical value, which will again have T-2
degrees of freedom.
5. Perform the test: If the hypothesised value of \beta (\beta^*) lies outside the confidence
interval, then reject the null hypothesis that \beta = \beta^*, otherwise do not reject the null.
Confidence Intervals Versus Tests of Significance
• Note that the Test of Significance and Confidence Interval approaches always give the
same answer.
• Under the test of significance approach, we would not reject H_0 that \beta = \beta^* if the test
statistic lies within the non-rejection region, i.e. if
-t_{crit} \le \frac{\hat{\beta} - \beta^*}{SE(\hat{\beta})} \le +t_{crit}
• Rearranging, we would not reject if
-t_{crit} \cdot SE(\hat{\beta}) \le \hat{\beta} - \beta^* \le +t_{crit} \cdot SE(\hat{\beta})
• But this is just the rule under the confidence interval approach:
\hat{\beta} - t_{crit} \cdot SE(\hat{\beta}) \le \beta^* \le \hat{\beta} + t_{crit} \cdot SE(\hat{\beta})
\hat{y}_t = 20.3 + 0.5091 x_t, \quad T = 22
          (14.38)  (0.2561)
• Using both the test of significance and confidence interval approaches,
test the hypothesis that \beta = 1 against a two-sided alternative.
• The first step is to obtain the critical value. We want t_{crit} = t_{20;5\%} = \pm 2.086.
[Figure: t-distribution with 5% rejection regions beyond -2.086 and +2.086]
Performing the Test
• Test of significance approach:
test stat = \frac{\hat{\beta} - \beta^*}{SE(\hat{\beta})} = \frac{0.5091 - 1}{0.2561} = -1.917
Do not reject H_0 since the test statistic lies within the non-rejection region.
• Confidence interval approach:
\hat{\beta} \pm t_{crit} \cdot SE(\hat{\beta}) = 0.5091 \pm 2.086 \times 0.2561 = (-0.0251, 1.0433)
Since 1 lies within the confidence interval, do not reject H_0.
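Both approaches can be written out in a few lines. A plain-Python sketch using the slide's numbers (the critical value 2.086 is taken from the t-tables rather than computed):

```python
# Test H0: beta = 1 vs H1: beta != 1 at the 5% level, T = 22
beta_hat, se_beta = 0.5091, 0.2561
beta_star = 1.0
t_crit = 2.086                                 # t_{20; 5%}, two-sided

# Test of significance approach
test_stat = (beta_hat - beta_star) / se_beta   # ~ -1.917
reject_sig = abs(test_stat) > t_crit           # False -> do not reject

# Confidence interval approach
ci = (beta_hat - t_crit * se_beta, beta_hat + t_crit * se_beta)
reject_ci = not (ci[0] <= beta_star <= ci[1])  # False -> same conclusion
print(round(test_stat, 3), tuple(round(v, 4) for v in ci))
```

By construction the two decision rules always agree, as the rearrangement above shows.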
• Note that we can test these with the confidence interval approach.
For interest (!), test
H_0 : \beta = 0 \quad vs. \quad H_1 : \beta \neq 0
H_0 : \beta = 2 \quad vs. \quad H_1 : \beta \neq 2
• For example, say we wanted to use a 10% size of test. Using the test of
significance approach,
test stat = \frac{\hat{\beta} - \beta^*}{SE(\hat{\beta})} = \frac{0.5091 - 1}{0.2561} = -1.917
as above. The only thing that changes is the critical t-value.
[Figure: t-distribution with 10% rejection regions beyond -1.725 and +1.725]
Changing the Size of the Test:
The Conclusion
• t20;10% = 1.725. So now, as the test statistic lies in the rejection region,
we would reject H0.
• If we reject the null hypothesis at the 5% level, we say that the result
of the test is statistically significant.
• The probability of a type I error is just \alpha, the significance level or size of test we
chose. To see this, recall what we said significance at the 5% level meant: it is only
5% likely that a result as or more extreme as this could have occurred purely by
chance.
• Note that there is no chance for a free lunch here! What happens if we reduce the
size of the test (e.g. from a 5% test to a 1% test)? We reduce the chances of making a
type I error ... but we also reduce the probability that we will reject the null
hypothesis at all, so we increase the probability of a type II error:
reduce size of test → more strict criterion for rejection → reject the null hypothesis less often → less likely to falsely reject, but more likely to incorrectly fail to reject
• So there is always a trade off between type I and type II errors when choosing a
significance level. The only way we can reduce the chances of both is to increase the
sample size.
A Special Type of Hypothesis Test: The t-ratio
• Suppose that we have the following parameter estimates, standard errors and
t-ratios for an intercept and slope respectively:

               \hat{\beta}_1    \hat{\beta}_2
Coefficient    1.10             -4.40
SE             1.35             0.96
t-ratio        0.81             -4.63
• If we reject H0, we say that the result is significant. If the coefficient is not
“significant” (e.g. the intercept coefficient in the last regression above), then
it means that the variable is not helping to explain variations in y. Variables
that are not significant are usually removed from the regression model.
• In practice there are good statistical reasons for always having a constant
even if it is not significant. Look at what happens if no intercept is included: the
line is forced through the origin, \hat{y}_t = \hat{\beta} x_t.
[Figure: scatter plot where the fitted line is forced through the origin, giving a poor fit to the data]