Does Class Attendance Affect Academic Performance? Evidence from “D’Annunzio” University

Vincenzo Andrietti∗ Rosaria D’Addazio†
January 30, 2012

Abstract
We analyze data from students enrolled in an Introductory Macroe-
conomics course at “D’Annunzio” University (Italy) in the 2004-2005
academic year to assess the impact of attendance on academic per-
formance. Using proxy-variable regressions to capture the effect
of unobservable factors possibly correlated with attendance, we still
find a positive and significant effect of attendance. However, when
using panel data fixed-effects estimators to eliminate time-invariant
individual-specific unobservables, the effect disappears. This suggests
that the positive effect of attendance commonly reported in the litera-
ture may still be capturing the impact of unobservables on academic
performance.

JEL classification: A22, I21.


Keywords: Attendance, Performance, University.


∗ Università degli Studi “D’Annunzio” di Chieti e Pescara. E-mail: vandriet@unich.it

† Final-year B.Sc. student in Economics at Università degli Studi “D’Annunzio”

1 Introduction
The economics of education literature has long sought to identify the
determinants of student performance in university economics courses.[1] A
relatively recent strand of this literature has focused on the role of class
attendance in determining student learning outcomes. Since Romer’s (1993)
seminal article, a number of studies have found positive effects of attendance
on performance, leading some authors to call for policies to increase or even
mandate class attendance. The extent to which these results are robust and
generalizable is not, however, entirely clear, since most studies leave the
two main problems usually affecting the attendance rate variable unresolved.
First, self-reported attendance rates are likely to contain measurement errors,
inducing attenuation bias into estimated coefficients. Second, attendance
rates are potentially endogenous, given that students’ choice of whether
or not to attend lectures is positively affected by unobservable individual
characteristics, such as ability, effort, and motivation, which are also likely
to have a positive effect on performance. If the latter were the case, the
estimated coefficients would suffer from (upward) endogeneity bias.
This paper addresses both of these issues through the collection of a novel
data set that matches survey data with administrative student records. First,
careful attendance monitoring in each class session is used to ensure accurate
measurement of attendance rates. Second, proxy regressions and panel data
estimators are used to account for attendance rate endogeneity. Our findings
are consistent with the hypothesis that the inclusion of proxy variables is not
sufficient to capture all the correlation between the regressor of interest and
unobservable ability, effort, and motivation. The bias correction obtained
using OLS proxy regressions goes in the expected direction, although the
effect of attendance remains positive and significant. However, when we
account for time-invariant student unobservables possibly correlated with
attendance by means of panel data estimators, we find that class attendance
does not have an impact on performance. This finding seems to confirm what
most instructors recognize: better students attend lectures more frequently
on average, and because of this inherent motivation, they also receive higher
grades. In this context, the implementation of incentive schemes at universities
aimed at fostering student attendance may have undesirable effects on student
learning outcomes.

[1] See Becker et al. (1990) for an overview of this literature.
The remainder of this work is organized as follows. Section 2 reviews the
literature. Section 3 describes the data. Section 4 illustrates the empirical
strategy. Section 5 presents and discusses the results. Section 6 concludes.

2 Literature
In his widely cited paper “Do students go to class? Should they?”, Romer
(1993) provided the first analysis of the relationship between lecture attendance
and exam performance.[2] Using attendance records in six sessions of his large
(n = 195) Intermediate Macroeconomics course, he found that attendance
had a positive and significant impact on academic performance. On the
basis of these findings, Romer recommended experimenting with mandatory
attendance policies to enhance student performance.
Following Romer’s (1993) seminal paper, several studies have attempted
to measure the impact of attendance on learning outcomes. Durden and
Ellis (1995) used students’ self-reported number of absences to explore the
relationship between absenteeism and academic achievement in several sections
(n = 346) of a Principles of Economics course. Controlling for student
differences in background, ability, and motivation, they found a nonlinear
effect of attendance: while a few absences do not lead to lower grades,
excessive absenteeism does. Using data on a sample of about 400 Agricultural
Economics students at four large US universities, Devadoss and Foltz (1996)
found that, after taking into account motivational and aptitude differences
across students, the difference in exam performance between a student with
perfect attendance and a student attending only half of the classes was, on
average, a full letter grade. A positive and significant relationship between
class attendance and academic performance was also found by Chan, Shum,
and Wright (1997) and, more recently, by Rodgers (2002).

[2] Earlier studies, including McConnell and Lamphear (1969), Paden and Moyer (1969),
Buckles and McMahon (1971), Schmidt (1983), Park and Kerr (1990), and Browne et al.
(1991), provided conflicting evidence on the effect of attendance among the determinants
of academic performance.
Among the cross-sectional studies that have reached less robust conclusions
about the positive effect of attendance on performance are Douglas and Sulock
(1995), Bratti and Staffolani (2002), Dolton, Marcenaro, and Navarro (2003),
and Kirby and McElroy (2003). In particular, using a sample of 371
first-year Economics students at an Italian university, Bratti and Staffolani
(2002) found that the positive and significant effect of class attendance on
performance is not robust to the inclusion of self-study hours.
Two recent strands of the literature exploit the availability of richer
data sets including repeated observations of the same students’ responses
to different questions, as well as different students’ responses to different
questions. The use of panel data models makes it possible to control for
time-invariant characteristics of both students and exam questions. In the first
of these two strands, Marburger (2001, 2006), Lin and Chen (2006) and Chen
and Lin (2008) built original data sets, matching students’ absence records
with teachers’ records of the class sessions when the material corresponding
to midterm exam questions was covered. Marburger (2001, 2006) estimated
a probit model in which the probability of a student responding incorrectly
to each question in a set of multiple-choice questions was related to the
student’s attendance at the lecture when the relevant material was covered.
He found that absenteeism increases the probability of an incorrect response
by 7 to 14 percent. Lin and Chen (2006) and Chen and Lin (2008) took
a different approach, measuring performance as a dummy variable taking
the value one for correct answers and attendance as a dummy variable
taking the value one if the student attended the lecture where the question
material was covered. Results from probit regressions including fixed effects for
students and questions indicate that attendance on a specific day significantly
increases the likelihood of responding correctly to a question based on the
material covered that day, thus suggesting a positive relationship between
attendance and academic performance. In the second strand of the literature,
Rodgers (2001, 2003), Cohn and Johnson (2006) and Stanca (2006) simply
exploited variation in attendance and academic performance (both measured
in percent) over different midterm exams, using panel data estimators to
account for time-invariant individual heterogeneity. Their findings indicate
that fixed effect estimators are preferable and that attendance has positive and
significant effects on performance, ranging from 0.04 to 0.15 percentage points
of performance for each additional percentage point of attendance. Following
this latter strand of the literature, we estimate the effect of attendance on
academic performance using panel data on Introductory Macroeconomics
students from an Italian university.

3 Data
The data used in this study were collected by conducting a survey among
undergraduate students in Economics attending an Introductory Macroeco-
nomics course at “G. D’Annunzio” University of Chieti and Pescara (Italy)
in the fall semester of the 2004-2005 academic year. The course was delivered
in three two-hour lectures per week over a twelve-week period to second-year
students enrolled in the Economics and Commerce (CLEC) and Environ-
mental Economics (CLEAM) degree programs.[3] Survey data were collected
through an initial and a follow-up questionnaire, and later matched with
administrative student records. At the beginning of the course, students were
told by their instructor that attendance rates would not affect their final
grade. Attendance was monitored at the beginning and at the end of each
class session. This allows us to construct attendance rates that do not suffer
from measurement error, unlike most of the previous literature, where
attendance rates are either self-reported[4] or recorded only during a sample
period.[5] Enrolled students were offered the opportunity to have their final
grade calculated as the average of a first and a second midterm exam, each
covering an equal fraction of the course and each carrying the same weight in
the final grade. The first midterm was held in the seventh week of the course
and covered material taught in weeks 1-6. The second midterm was held in
week 14 and covered material taught in weeks 8-13. Alternatively, students
could take a single comprehensive final exam.

[3] The course was also taken by a few students enrolled in other degree programs, as well as
by third- and fourth-year students.
[4] Durden and Ellis (1995), Stanca (2006).
[5] Romer (1993), Rodgers (2001), Rodgers (2003).
In order to work with a more homogeneous sample and – most importantly
– to exploit the panel nature of the data, we focused on the students who
chose to take midterm exams.[6] The sample used in the empirical analysis is
an unbalanced panel of 144 observations, with valid information on midterms
scores and individual characteristics.[7] Descriptive statistics are presented in
Table 2. Academic performance, our dependent variable, was measured by
midterm exam scores. Actual test scores ranged from 0 to 30, with 18 as the
passing grade, and were rescaled to the 0-100 range for ease of interpretation
and comparability with the results reported in the literature. The overall
average rescaled score in the full sample was 78.9 percent, with a significant
increase from 77.8 in the first midterm to 85.2 in the second midterm. A
typical student attended, on average, 79.9 percent of the classes, a figure that
is well above the figures reported in Romer and Stanca (67 and 71 percent,
respectively). Average class attendance decreased slightly – from 80.1 to
78.1 percent – over the two midterm periods. Ability was proxied by two
indicators of past performance – high school grade (HSG) and grade point
average (GPA) – both measured on a 60-100 point scale – and by the average
number of credits completed per year since registration – CFU per year – as
an indicator of the speed of coursework completion. The average GPA and
HSG values – 84.98 and 85.79, respectively – were also considerably higher
than those reported by Stanca (76.86 and 77.24, respectively). Effort was
proxied by two variables: hours of self-study during the lecture term and hours
of self-study during the lecture-free week prior to midterms, both measured as
average weekly hours. As we would expect, the average time that students
dedicated to self-study rose from 10.67 hours per week during the lecture
period – a similar figure to the one reported by Stanca (10.85) – to 19.92 hours
in the lecture-free week prior to midterms. Motivation was measured, on a
0-100 percent scale, by four student self-reported variables – subject difficulty,
attendance benefits, subject interest, and teaching evaluation – aiming to
capture information on the match between academic and student inputs, and
therefore the suitability of the student for the subject.[8] On average, students
reported high interest in the subject and high evaluations of the teaching
(82 and 89.7 percent, respectively). The set of control factors used in the
empirical analysis also includes demographic variables (age, female, siblings,
living away from family), family background variables (dummies for parents’
education), and student characteristics (second year, CLEAM, other courses),
as well as a dummy variable indicating the second midterm.

[6] If the decision to take midterms rather than a final exam was based on unobservables
correlated with the error term in the performance equation, a selection problem would
arise. Table 1 reports descriptive statistics for midterm versus final exam takers. Despite
the observed differences (notably in attendance rates and exam scores), we found no
evidence of selection on unobservables based on the estimation of a two-step Heckman
sample selection model. Estimation results are available upon request.
[7] From a potential balanced panel of 162 observations (N = 81, T = 2) based on students
taking the first midterm and with nonmissing values in the observable characteristics
relevant for the empirical analysis, 18 observations were dropped for students who did not
take the second midterm. Again, we estimated a Heckman sample selection model for the
academic performance observed in the second midterm, and could not reject the hypothesis
of no correlation among the errors in the performance and selection equations. Estimation
results are available upon request.

4 Empirical Strategy
Our goal is to specify and estimate an appropriate education production
function (EPF) explaining academic performance in terms of class attendance
rate, all other things being equal. According to the EPF approach,[9] a basic
learning model can take the following form:

y_i = β_0 + β_1 x_{i1} + β_2 x_{i2} + u_i,   i = 1, 2, …, n   (1)

where y_i is the learning outcome for individual i, measured by academic
performance (exam score), x_{i1} is the academic input, measured by class attendance,
x_{i2} is a vector of student inputs, and u_i is an error term containing all other
factors that affect learning.
[8] See Stanca (2006).
[9] See, among others, Lazear (2001) and Todd and Wolpin (2003).

Student inputs include observable student characteristics, such as study
habits and family background, as well as variables that are not directly
observable, such as ability, effort, and motivation. Since the latter are all
likely to be positively correlated with the students’ propensity to attend class
and with academic performance, excluding them from the model because of
their unobservability would make the OLS estimator of β_1 upwardly biased
and inconsistent.
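To make the direction of this bias explicit, it may help to recall the standard
omitted-variable result (a textbook expression stated here for reference, not taken
from the original text): in the simplest case, with attendance as the only included
regressor and a single unobserved student input x*_i with coefficient β* omitted
from the estimated equation,

plim β̂_1 = β_1 + β* · Cov(x_{i1}, x*_i) / Var(x_{i1}),

so a positive β* combined with a positive covariance between attendance and the
omitted input (ability, effort, or motivation) pushes the estimated attendance
effect upward.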
Our empirical strategy exploits two different econometric approaches
to account for possible endogeneity of class attendance. The first approach
consists in finding appropriate proxy variables for unobservable student inputs.
Consider a population model:

y_i = β_0 + β_1 x_{i1} + β_2 x*_{i2} + u_i,   (2)

in which x*_{i2} is unobserved, and suppose a proxy variable x_{i2} is available
for x*_{i2}, where:

x*_{i2} = δ_0 + δ_2 x_{i2} + η_i.   (3)

Substituting (3) into (2) gives y_i = (β_0 + β_2 δ_0) + β_1 x_{i1} + β_2 δ_2 x_{i2} + (u_i + β_2 η_i).
Hence, provided η_i is uncorrelated with x_{i1} (and with the proxy x_{i2}, as holds by
construction when (3) is a linear projection), this plug-in regression of y_i on x_{i1}
and x_{i2} yields unbiased and consistent estimates of the class attendance parameter
β_1. In our analysis, ability is proxied by HSG and GPA, effort by hours
of self-study, and motivation by subject difficulty level, subject interest, attendance
benefits, and teaching evaluation.
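As an illustration of this proxy-variable approach, the following minimal Python
sketch runs a plug-in OLS regression of this kind with statsmodels. The data file
and all column names (score, attendance, gpa, and so on) are hypothetical
placeholders rather than the variable names actually used for the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled data set: one row per student-midterm observation.
df = pd.read_csv("midterm_panel.csv")

# Plug-in proxy regression: observed proxies stand in for unobserved
# ability (gpa, hsg, cfu_per_year), effort (study hours), and motivation
# (difficulty, benefits, interest, teaching_eval).
formula = (
    "score ~ attendance + midterm2"
    " + gpa + hsg + cfu_per_year"
    " + study_hours_lectures + study_hours_break"
    " + difficulty + benefits + interest + teaching_eval"
    " + female + age + siblings + away_from_home"
)
proxy_ols = smf.ols(formula, data=df).fit(cov_type="HC1")  # robust SEs
print(proxy_ols.summary())
```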
Proxy variables may be difficult to find in practice, and/or the ones
available may not capture all of the correlation between the unobserved
factors (student inputs) and the regressor of interest (class attendance). An
alternative possibility to address the potential endogeneity of class attendance
is to exploit the variability of attendance and performance within observational
units over time. The availability of panel data, where the same n cross-
sectional units are observed at two or more time periods, makes it possible,
under certain assumptions, to eliminate the effect of unobservable variables
that differ across units but are constant over time.

Consider a linear population model of the form:

y_{it} = β_0 + β_1 x_{i1t} + β_2 x_{i2} + c_i + u_{it},   t = 1, 2, …, T   (4)

where y_{it} denotes the time-varying dependent variable (academic performance),
x_{i1t} is the time-varying explanatory variable of interest (class attendance), x_{i2}
is a time-invariant regressor, c_i is time-constant unobserved heterogeneity
potentially correlated with the regressors, and u_{it} is the idiosyncratic error
component, uncorrelated with (x_{i1t}, x_{i2}, c_i). The fixed effects (FE) estimator is
based on the assumption that c_i represents time-invariant effects, potentially
correlated with the regressors, that can be eliminated by subtracting the
corresponding model for individual means:

(y_{it} − ȳ_i) = β_1 (x_{i1t} − x̄_{i1}) + (u_{it} − ū_i).   (5)

The FE estimator is unbiased and consistent as long as the explanatory
variables are strictly exogenous. The drawbacks are a loss of efficiency and
the loss of all time-constant regressors, whether observed or unobserved.
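The within transformation in (5) can be implemented directly. The sketch below
(hypothetical file and column names; pandas and numpy assumed) demeans the
time-varying variables by student and applies OLS to the demeaned data; the
reported standard errors would additionally need a degrees-of-freedom correction
for the estimated student means.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("midterm_panel.csv")  # hypothetical file and columns

# Within (fixed effects) transformation: subtract each student's mean from
# the time-varying variables; time-invariant regressors drop out.
cols = ["score", "attendance", "midterm2"]
within = df[cols] - df.groupby("student_id")[cols].transform("mean")

# OLS on the demeaned data gives the FE estimate of the attendance effect.
X = within[["attendance", "midterm2"]].to_numpy()
y = within["score"].to_numpy()
beta_fe, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["attendance", "midterm2"], beta_fe.round(3))))
```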
To address these issues, we estimate a random effects (RE) model based
on the assumption that the time-invariant unobserved heterogeneity term c_i
is uncorrelated with the regressors.[10] The RE estimator can be obtained as
the OLS estimator applied to data transformed in quasi-deviations from
individual means:

y_{it} − λȳ_i = β_0(1 − λ) + β_1(x_{i1t} − λx̄_{i1}) + β_2(x_{i2} − λx̄_{i2}) + (ε_{it} − λε̄_i)   (6)

where:

λ = 1 − [ σ_u^2 / (σ_u^2 + T σ_c^2) ]^{1/2}.

Under the assumption of orthogonality between unobserved individual heterogeneity
and explanatory variables, the RE estimator is consistent and efficient
and makes it possible to estimate the effect of observed time-invariant indi-
vidual characteristics. However, the restriction imposed by the orthogonality
assumption also represents the main limitation of the RE estimator. In partic-
ular, the choice between FE and RE hinges on the validity of the assumption
about the relationship between the unobserved individual heterogeneity and
the regressors included in the model.
In the next section, we report and discuss results from estimation of FE
and RE models on an unbalanced panel of 144 observations. The assumption
of orthogonality between unobserved time-invariant effects and regressors is
tested using a Hausman test.[11]

[10] Under this assumption, a pooled OLS regression could also be used to obtain a
consistent estimator of β_1. However, the pooled estimator would not be efficient, since
the composite errors ε_{it} = c_i + u_{it} are serially correlated due to the presence of c_i
in each time period.
[11] The Hausman test is based on the difference between the RE and FE estimators.
Under the null hypothesis, the unobserved time-invariant effect is uncorrelated with the
explanatory variables and RE is consistent as well as efficient; under the alternative
hypothesis, RE is inconsistent, while FE is consistent under either hypothesis. A
statistically significant difference between the estimators therefore leads to rejection of
the null, suggesting FE as the better choice.
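For concreteness, the sketch below shows one way the FE and RE estimates and the
Hausman statistic could be computed in Python. It assumes the linearmodels
package and a data frame with hypothetical column names indexed by student and
midterm; it illustrates the general procedure rather than the exact code used for
this paper.

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("midterm_panel.csv")            # hypothetical file/columns
panel = df.set_index(["student_id", "midterm"])  # entity-time MultiIndex

# FE: student fixed effects absorb all time-invariant heterogeneity.
fe = PanelOLS.from_formula(
    "score ~ attendance + midterm2 + EntityEffects", data=panel
).fit()

# RE: quasi-demeaned GLS; time-invariant regressors could also be included.
re = RandomEffects.from_formula(
    "score ~ 1 + attendance + midterm2", data=panel
).fit()

# Hausman statistic over the coefficients common to both estimators.
common = ["attendance", "midterm2"]
b = (fe.params[common] - re.params[common]).to_numpy()
V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
H = float(b @ np.linalg.inv(V) @ b)
p_value = stats.chi2.sf(H, df=len(common))
print(f"Hausman H = {H:.2f}, p-value = {p_value:.3f}")
```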

5 Results
Our empirical analysis focused on two different econometric approaches.
First, we estimated alternative specifications of our learning model (1) by
OLS-proxy regression. Table 3 presents the point estimates for the full sample.
Attendance is found to have a positive and statistically significant effect on
performance in all models. In the basic univariate specification (column 1),
the point estimate indicates that one additional percentage point of class
attendance increases test scores by 0.21 percentage points. The addition of a
set of controls for individual characteristics (column 2) slightly increases the
estimated attendance coefficient to 0.22. Next, we considered how controlling
for unobservable factors, such as ability, effort, and motivation, affected the
estimated coefficient for attendance. The addition of the set of ability proxies
(column 3) slightly reduces the estimated effect of attendance to 0.21. All the
ability proxies are found to have a positive and statistically significant effect
on academic performance. Moreover, there is a considerable improvement in
the model's capacity to explain the variation in the dependent variable (the
adjusted R2 rises from 0.18 to 0.43). By contrast, introducing only the set of proxies
for effort and motivation reduces the effect of attendance on performance to
0.15. However, none of these proxies is statistically significant at standard
levels, and the adjusted R2 falls considerably. Focusing on the complete
specification (column 5), where both sets of proxies are added, the attendance
coefficient is reduced to 0.19, while the adjusted R2 rises to 0.46. Given that each
lecture represents about 5.5% of total attendance during each midterm period,[12] the
point estimate resulting from the complete specification implies that the
gross return of attending an additional two-hour lecture would on average
be about one percentage point in terms of midterm score.[13] This also implies that
a typical student with a 100% attendance rate would obtain on average a
score 9 percentage points higher[14] than a student attending only 50% of the
classes, all else equal.

[12] 18 two-hour lectures were offered in each of the two midterm periods.
[13] Stanca (2006) obtained a similar return to attendance (0.91 percentage points) from
cross-sectional OLS estimates.
[14] (0.18 × 0.50) = 0.09, i.e., roughly 9 percentage points on the 0-100 score scale.
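For reference, the back-of-envelope arithmetic behind these two figures, as we read
it from the complete specification, is:

per additional two-hour lecture:  0.19 × 5.5 ≈ 1.0 percentage point of midterm score;
100% versus 50% attendance:       0.19 × 50 ≈ 9.5 (footnote 14 uses the rounded
coefficient 0.18, giving exactly 9 percentage points).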
Among the ability proxies, CFU per year and GPA remain significant at
standard levels: an additional point of GPA increases performance
by about 1 percentage point, while an additional CFU per year increases performance
on average by 0.4 percentage points. Among the proxies for effort and motivation, only
the teaching evaluation is statistically significant at standard levels, each additional
evaluation point being on average associated with a 0.28 percentage point
increase in academic performance. Among the remaining control variables,
only the dummy indicating the second midterm test – which aims to capture
heterogeneity among midterms – and the variable indicating the number of
siblings are economically and statistically significant across all specifications.
The results in Table 3 are consistent with the findings of Romer (1993) in
suggesting that ability is positively related to both attendance and perfor-
mance. If this is the case, attempting to control for the effect of unobserved
ability becomes crucial when estimating the effect of attendance on perfor-
mance. In contrast, the findings indicate that controlling for effort and
motivation has a lower impact on the estimated coefficient for attendance. A
different, more plausible interpretation is that, despite the introduction of
a set of control variables, the relationship still reflects the effect of omitted
factors correlated with regressors. We therefore attempted an alternative
means of addressing this issue, exploiting the variability of attendance and
performance in the time dimension. We used an unbalanced panel of 144
observations, collected by recording grades on two midterm tests and at-
tendance levels in the related fractions of the course for all the students
taking the midterms, to perform a panel data analysis. The results of fixed
effects (FE) and random effects (RE) model estimations are reported in
Table 4. The RE estimator confirms a positive effect of attendance. In the
complete specification, the effect of attendance is slightly reduced to 0.18,
but still significant at 5%. However, the results based on the FE estimator,
accounting for possible correlation between attendance and time-invariant
individual-specific unobservables, show that the impact of class attendance is
both economically and statistically not significant. This suggests a persisting
positive correlation between unobserved effects and time-varying regressors,
even after controlling for ability, effort, motivation, and other individual
characteristics. The Hausman test statistic strongly rejects (p = 0.00) the
null hypothesis of orthogonality between unobservable characteristics and
regressors, therefore pointing to FE as the appropriate estimator.
On the basis of these findings, our answer to the question “does class
attendance help to improve learning outcomes?” would be negative, and at
odds with the results provided by most of the recent literature. Before drawing
any firm conclusion, however, we should at least try to understand the possible
reasons that may have led our evidence to differ from the results of other recent
panel data studies. The most obvious possibility deals with the differences
between instructors and testing instruments. The attendance/performance
effect is less likely to be exhibited if the instructor is a relatively weak lecturer
or tends to follow the book religiously during lectures. It is also less likely to
appear if the testing instrument emphasizes material that can be adequately
captured simply by reading the textbook (i.e., memorization or lower-level
critical thinking), thereby reducing the value-added from class attendance.
We address both of these possible causes by looking again at the descriptive
statistics reported in Table 2. In particular, we find that the self-reported
variables measuring the benefits of attendance and the teaching evaluation are
very high (around 90 percent or above). This should indicate, on the one hand, that
students have a high opinion of the instructor and, on the other hand, that
students consider lectures to be beneficial in terms of the value-added provided
in them.
Another possible cause of our divergent results is sample selection, given
that we use a sample that includes only the subset of students that chose to
take midterm exams. We addressed this issue as well by estimating sample
selection models accounting first for the decision to take the first midterm and
then for the decision to take the second midterm, conditional on having taken
the first midterm. In both cases, we could not reject the null hypothesis of
no correlation among the errors in the performance and selection equations.
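Although the estimation details are not reported here, a two-step Heckman
procedure of the kind referred to above can be sketched as follows. The code is a
minimal illustration under assumed, hypothetical variable names (took_midterms,
score, attendance, gpa, and so on), not the specification actually estimated; note
also that the second-stage standard errors would require the usual Heckman
correction.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("students.csv")  # hypothetical file and column names

# Step 1: probit for the decision to take the midterms (selection equation);
# it should include at least one variable excluded from the outcome equation.
Z = sm.add_constant(df[["attendance", "gpa", "second_year", "away_from_home"]])
probit = sm.Probit(df["took_midterms"], Z).fit(disp=0)
index = Z @ probit.params                         # estimated linear index
df["inv_mills"] = norm.pdf(index) / norm.cdf(index)

# Step 2: performance equation on the selected sample, augmented with the
# inverse Mills ratio; an insignificant inv_mills coefficient is consistent
# with no selection on unobservables.
sel = df[df["took_midterms"] == 1]
X = sm.add_constant(sel[["attendance", "gpa", "inv_mills"]])
heckit = sm.OLS(sel["score"], X).fit()
print(heckit.summary())
```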
Another possibility lies in the inherent difficulty in using panel data for
a study of this nature. Panel data are generally used to eliminate the effect
of unobservable variables that differ across individuals but are constant over
time. This assumption may, however, be unreasonable in our case given that
some unobserved effects (i.e., motivation and effort) may vary over time,
depending on the result in the first midterm test. Moreover, even proxy
variables that were meant to capture such effects – like study hours, teaching
evaluation, and subject interest, which are time-varying by definition – were
collected only once, and thus had to be treated as time-constant variables.[15]
Despite these data limitations, our findings are supported by similar results
in Andrietti et al. (2011), after accounting for individual time-variation in
study time (and other motivation/effort proxies) in FE estimations.
The last – and perhaps most important – reason that could lead to
insignificant FE results would be the lack of time-variation within units
of observations:[16] in particular, the within-student variation of attendance
may simply not be large enough (i.e., most students might have similar
attendance patterns in the periods before and after the first midterm exam)
for identification purposes. A small within-variation of attendance might be
highly correlated with the FE individual-specific constant terms, and would
yield unstable estimates. In order to assess this possibility, we perform
a further data check. Based on the figures reported in Table 5 on the overall
and within standard deviations for our dependent variable (score) and our
most relevant independent variable (attendance), it seems that there should
be enough variation in the data to identify the impact of attendance on
performance using panel data methods. On the basis of the above discussion,
we believe that the data used in this paper contain enough information to
answer our research question and that our findings – although at odds with
most of the previous literature – call at least for further research on the
important topic of how class attendance affects academic performance.

[15] Consistent with the student time-allocation theories posited by Becker et al. (1990),
Krohn and O'Connor (2005) found that self-reported study hours in intermediate
macroeconomics were correlated with students' performance on the midterm and that
students re-allocated their study time away from the course if they performed well on
the midterm.
[16] Small sample size could be a further cause of imprecise coefficient estimation.

6 Conclusions
Although continuous evaluation of student learning is among the princi-
ples underlying the European Higher Education Area (Bologna Process),
evidence about the effect of class attendance on academic performance is
lacking for most European Union countries. This is partly due to the lack
of adequate data and partly due to methodological problems. This analysis
represents a first step towards filling this gap. Using new data that combine
different sources of information and proxy-regression techniques, we find a
significant effect of lecture attendance on academic performance. However,
when we account for time-invariant student unobservables possibly correlated
with attendance by means of panel data estimators, we find that class atten-
dance does not have an impact on performance. This finding, despite the
caveats emerging from the discussion in the previous section, confirms what
most instructors recognize: better students attend lectures more frequently
on average, and receive higher grades because of this inherent motivation.
In this context, university policies based on incentive schemes designed to
foster student attendance may have neutral or even undesirable effects on
student learning. If attendance is correlated with ability and motivation,
as our findings suggest, it is unlikely that instructors can improve student
achievement by changing the course structure or by establishing mandatory
attendance policies. Under this assumption, unmotivated students forced to
attend lectures are unlikely to pay attention or participate and therefore gain
minimally from such policies. In interpreting our results, it is important to
recognize that the sample used here is limited to students taking an Introduc-
tory Macroeconomics course during a semester in one instructor’s class at a
single institution. This suggests caution, especially in view of the fact that
the results are at least partially at odds with many of the findings reported
in the literature. Different conclusions might be found over time and space,
or when alternative statistical approaches are used. Further replication is
needed.

References
Andrietti, V., D’Addazio, R. and Velasco Gomez, C. (2011). Class attendance
and economic performance among Spanish economics students. Mimeo,
Universidad Carlos III de Madrid.

Becker, W., Greene, W. and Rosen, S. (1990). Research on high school
economic education, Journal of Economic Education, 80(2), 14–22.

Bratti, M. and Staffolani, S. (2002). Student time allocation and educational
production functions, Quaderni di Ricerca no. 170, Dipartimento di Economia,
Università di Ancona.

Browne, N. M., Hoag, J., Wheeler, M. V. and Boudreau, N. (1991). The
impact of teachers in economic classrooms, Journal of Economics, 17, 25–30.

Buckles, S. G. and McMahon, M. E. (1971). Further evidence on the value
of lecture in elementary economics, Journal of Economic Education, 2(2),
138–141.

Chan, K. C., Shum, C. and Wright, D. J. (1997). Class attendance and student
performance in principles of finance, Financial Practice and Education,
7(2), 58–65.

Chen, J. and Lin, T. F. (2008). Class attendance and exam performance: a
randomized experiment, Journal of Economic Education, 39(3), 213–227.

Cohn, E. and Johnson, E. (2006). Class attendance and performance in
Principles of Economics, Education Economics, 14(2), 211–233.

Devadoss, S. and Foltz, J. (1996). Evaluation of factors influencing student
class attendance and performance, American Journal of Agricultural
Economics, 78(3), 499–507.

Douglas, S. and Sulock, J. (1995). Estimating educational production functions
with correction for drops, Journal of Economic Education, 26(2), 101–112.

Durden, G. C. and Ellis, L. V. (1995). The effects of attendance on student
learning in principles of economics, American Economic Review Papers and
Proceedings, 85(2), 343–346.

Kirby, A. and McElroy, B. (2003). The effect of attendance on grade for
first-year Economics students in University College Cork, The Economic
and Social Review, 34(3), 311–326.

Krohn, J. B. and O’Connor, C. M. (2005). Student effort and performance
over the semester, Journal of Economic Education, 36(1), 3–28.

Lazear, E. P. (2001). Educational production, Quarterly Journal of Economics,
116(3), 777–803.

Lin, T. F. and Chen, J. (2006). Cumulative class attendance and exam
performance, Applied Economics Letters, 13(14), 937–942.

Marburger, D. R. (2001). Absenteeism and undergraduate exam performance,
Journal of Economic Education, 32(2), 99–109.

Marburger, D. R. (2006). Does mandatory attendance improve student
performance?, Journal of Economic Education, 37(2), 148–155.

McConnell, C. R. and Lamphear, C. (1969). Teaching Principles of Economics
without lectures, Journal of Economic Education, 1(4), 20–32.

Paden, D. W. and Moyer, M. E. (1969). The effectiveness of teaching methods:
the relative effectiveness of three methods of teaching Principles of Economics,
Journal of Economic Education, 1, 33–45.

Park, K. H. and Kerr, P. M. (1990). Determinants of academic performance:
a multinomial logit approach, Journal of Economic Education, 21(2), 101–111.

Rodgers, J. R. (2001). A panel-data study of the effect of student attendance
on university performance, Australian Journal of Education, 45(3), 284–295.

Rodgers, J. R. (2002). Encouraging tutorial attendance at university did not
improve performance, Australian Economic Papers, 41(3), 255–266.

Rodgers, J. R. and Rodgers, J. L. (2003). An investigation into the academic
effectiveness of class attendance in an Intermediate Microeconomic Theory
class, Education Research and Perspectives, 30(1), 27–41.

Romer, D. (1993). Do students go to class? Should they?, Journal of
Economic Perspectives, 7(3), 167–174.

Schmidt, R. M. (1983). Who maximizes what? A study in student time
allocation, The American Economic Review, 73(2), 23–28.

Stanca, L. (2006). The effects of attendance on academic performance:
panel data evidence for Introductory Microeconomics, Journal of Economic
Education, 37(3), 251–266.

Todd, P. E. and Wolpin, K. I. (2003). On the specification and estimation of
the production function for cognitive achievement, The Economic Journal,
113, F3–F33.

Table 1: Descriptive Statistics. Midterms vs. Final Exam Takers

Variable                                 Source    Totals (n = 116)      Midterms (n = 81)     Final (n = 35)
                                                   Mean    Std. Dev.     Mean    Std. Dev.     Mean    Std. Dev.
Score (%) admin 74.04 21.97 79.77 17.14 60.76 26.10
Attendance (%) survey 59.02 37.13 77.34 23.02 16.63 27.73
Attendance Midterm 1 (%) survey 60.83 37.32 78.70 22.98 19.46 30.76
Attendance Midterm 2 (%) survey 56.61 39.01 75.51 26.56 12.86 26.15
Female admin 0.55 0.50 0.60 0.49 0.43 0.50
Age admin 23.40 4.20 22.55 2.69 25.34 6.10
Divorced Parents survey 0.06 0.24 0.06 0.24 0.06 0.23
Siblings survey 1.45 0.82 1.42 0.82 1.51 0.82
Living Away from Home survey 0.43 0.50 0.49 0.50 0.28 0.46
Father Graduate survey 0.15 0.35 0.16 0.37 0.11 0.32
Mother Graduate survey 0.15 0.36 0.13 0.34 0.20 0.40
Second Year admin 0.47 0.50 0.59 0.49 0.20 0.40
CLEAM admin 0.23 0.42 0.23 0.43 0.23 0.43
Other Courses admin 0.09 0.28 0.06 0.24 0.14 0.35
CFU per Year admin 15.87 9.91 16.99 9.69 13.27 10.06
High School Grade admin 83.40 13.45 84.81 13.39 80.11 13.21
Grade Point Average admin 81.94 14.97 84.54 7.05 75.91 24.23
Hours of Study during Lectures Weeks survey 9.71 7.46 10.33 7.60 8.26 7.01
Hours of Study in Lecture-Free Weeks survey 20.17 14.25 20.23 13.77 20.03 15.51
Subject Difficulty Level survey 75.61 13.92 76.25 12.30 74.14 17.21
Attendance Benefits survey 91.08 13.19 90.84 14.12 91.63 10.89
Subject Interest survey 80.51 16.76 81.04 17.11 79.28 16.09
Teaching Evaluation survey 87.99 12.54 88.75 12.25 86.23 13.19

Table 2: Descriptive Statistics. Unbalanced Panel Pooled Sample

Variable Mean Std. Dev. Min. Max. n


Avg. Score Midterms (%) 81.62 15.26 20 100 144
Score Midterm 1 (%) 77.81 18.21 23.33 100 144
Score Midterm 2 (%) 85.24 14.88 33.33 100 126
Attendance Midterms (%) 79.92 21.87 0 100 144
Attendance Midterm 1 (%) 80.12 21.17 0 100 144
Attendance Midterm 2 (%) 78.12 24.18 0 100 144
Female 0.6 0.49 0 1 144
Age 22.48 2.65 20 32 144
Divorced Parents 0.06 0.23 0 1 144
Siblings 1.41 0.8 0 4 144
Live Away from Home 0.5 0.50 0 1 144
Father Graduate 0.15 0.35 0 1 144
Mother Graduate 0.14 0.35 0 1 144
Second Year 0.61 0.49 0 1 144
CLEAM 0.21 0.408 0 1 144
Other Courses 0.06 0.24 0 1 144
CFU per Year 17.16 9.84 0 36 144
High School Grade 85.79 12.85 60 100 144
Grade Point Average 84.98 7.07 71.67 98.89 144
Hours of Study during Lecture Weeks 10.67 7.63 0 36 144
Hours of Study in Lecture-Free Weeks 19.92 13.77 0 60 144
Subject Difficulty Level 76.78 12.14 30 95 144
Attendance Benefits 91.33 14.18 10 100 144
Subject Interest 82.07 16.49 6 100 144
Teaching Evaluation 89.71 11.56 50 100 144

Table 3: Determinants of Academic Performance: OLS Estimates

Independent Variable                     OLS (1)    OLS (2)    OLS (3)    OLS (4)    OLS (5)
Attendance 0.21∗∗ 0.22∗∗ 0.21∗∗ 0.15∗ 0.19∗∗
(0.07) (0.07) (0.06) (0.08) (0.06)
Midterm 2 10.67∗∗ 9.83∗∗ 8.17∗∗ 8.96∗∗ 7.74∗∗
(3.04) (2.97) (2.47) (2.88) (2.43)
Female 1.88 −1.25 −0.76 −2.25
(3.18) (2.67) (3.30) (2.81)
Age −1.02 0.83 −0.28 1.46†
(0.85) (0.79) (0.94) (0.87)
Divorced Parents 9.44 12.18† 4.23 7.96
(8.09) (6.72) (8.55) (7.23)
Siblings −3.68† −3.37∗ −4.36∗ −4.01∗
(1.94) (1.62) (1.91) (1.62)
Father Graduate −4.87 −4.22 −4.15 −4.19
(4.87) (4.04) (4.80) (4.04)
Mother Graduate 4.33 5.19 −0.94 1.20
(5.16) (4.33) (5.24) (4.55)
Living Away from Home 3.59 3.11 3.41 2.82
(3.09) (2.65) (3.10) (2.68)
Second Year 2.07 1.09 0.85 0.30
(4.70) (3.91) (4.64) (3.95)
CLEAM −6.73 −9.40∗ −7.26† −10.46∗∗
(4.12) (3.60) (4.19) (3.66)
Other Courses −3.65 −9.66† −2.42 −8.42
(6.75) (5.80) (7.09) (6.18)
CFU per Year 0.32† 0.40∗
(0.18) (0.18)
High School Grade 0.34∗∗ 0.23
(0.12) (0.14)
Grade Point Average 1.03∗∗ 1.04∗∗
(0.20) (0.20)
Hours of Study in Lecture Weeks −0.14 −0.25
(0.24) (0.21)
Hours of Study in Lecture-Free Weeks 0.00 0.10
(0.12) (0.11)
Subject Difficulty Level 0.23 0.17
(0.14) (0.12)
Attendance Benefits 0.15 0.03
(0.14) (0.12)
Subject Interest 0.16 0.04
(0.12) (0.11)
Teaching Evaluation 0.19 0.28∗
(0.15) (0.13)
Adj. R2 0.13 0.18 0.43 0.23 0.46
Note: n = 144.
Significance levels: († ) p < 0.10, (∗ ) p < 0.05, (∗∗ ) p < 0.01.
Standard errors of estimated coefficients are reported in parentheses.

Table 4: Determinants of Academic Performance. Panel Estimates

Independent Variable           FE          RE (1)      RE (2)
Attendance −0.02 0.19∗∗ 0.18∗
(0.11) (0.07) (0.07)
Midterm 2 2.48 6.08∗∗ 5.60∗∗
(1.63) (2.08) (1.98)
Hausman test 60.20 46.49
(p−value) (0.00) (0.00)
Note: n = 144.
Significance levels: († ) p < 0.10, (∗ ) p < 0.05, (∗∗ ) p < 0.01.
Standard errors of estimated coefficients are reported in parentheses.
RE(2) specification includes the same regressors as OLS(5).

Table 5: Panel Descriptive Statistics

Variable           Mean     Std. Dev. (Overall)     Std. Dev. (Within)
Score (%) 78.90 19.34 6.08
Attendance (%) 79.92 21.87 7.00
