To cite this article: David Nicol, Avril Thomson & Caroline Breslin (2014) Rethinking feedback
practices in higher education: a peer review perspective, Assessment & Evaluation in Higher
Education, 39:1, 102-122, DOI: 10.1080/02602938.2013.795518
Introduction
Feedback is a troublesome issue in higher education. Whilst it is recognised as a
core component of the learning process, national surveys, both in the UK (Higher
Education Funding Council for England 2011) and in Australia (James, Krause, and
Jennings 2010), consistently show that students are less satisfied with feedback than
with any other feature of their courses. The natural response to this predicament has
been to put effort into enhancing the quality of the feedback information provided
by teachers, in particular its promptness, level of detail, clarity, structure and relevance. Well-meaning as these interventions are, there is little evidence that they have had any effect on student satisfaction ratings in national surveys and, indeed, a growing number of studies now show that such enhancements of teacher feedback do not result in improved student learning (e.g. Crisp 2007; Bailey and Garner 2010; Wingate 2010). In addition, such interventions usually require a
[…]
features of the text such as the clarity and flow of the argument. Such non-directive
feedback is particularly valuable as it is positively associated with complex repairs
in meaning at the sentence and paragraph level. Thirdly, some researchers maintain
that the receipt of feedback from multiple peers helps sensitise students, as authors,
to different readers’ perspectives (Cho, Cho, and Hacker 2010). Such audience
awareness is regarded as important for the development of writing skills.
One feature of peer review that has perhaps not been given adequate recognition in the research literature is that it allows students to close the gap between the receipt of feedback and its application more effectively than is usual with teacher feedback. In peer review,
the normal practice is that students produce a draft assignment, receive feedback
from peers and then rework and resubmit the same assignment. Hence they have
opportunities to directly use the feedback they receive. Such structured opportunities
to update the same assignment are rare after teacher feedback, as students usually
move on to the next assignment after receiving such feedback. Seen from this perspective, peer review practices might benefit learning, not just because of the quantity and variety of feedback students receive from multiple peers, but also because the provision and use of feedback are more tightly coupled temporally. In this respect, peer review practices are especially effective in bringing into play the constructivist learning principles advocated by feedback researchers.
[…]
from reviewing rather than how they learn. Nonetheless, Cho and MacArthur (2010)
and Cho and Cho (2011) propose three possible interpretations to account for students’ learning from reviewing. One interpretation is that reviewing provides students
with opportunities to examine peer texts from the perspective of a critical reader; in so
doing, they develop a better understanding of how readers might interpret the texts
they produce; this, in turn, helps them better monitor and improve their own writing.
Another interpretation is that reviewing brings into play important problem-solving
processes: students must analyse the work of peers, diagnose problems and suggest
solutions. Regular practice in these cognitive processes, it is argued, helps students
learn to produce good quality work themselves. A third interpretation is that reviewers
learn by producing explanations, by generating comments about what makes the work
of peers strong or weak. This interpretation is consistent with the extensive work of
Roscoe and Chi (2008) on peer tutoring, where they propose that the act of constructing explanations for peers leads student-tutors to rehearse, evaluate and hence
improve their own understanding of the topic. Roscoe and Chi (2008) use the term
reflective knowledge building to refer to this ‘explanation’ effect.
These interpretations are interesting, as they not only contribute to the
theoretical shift away from feedback as a ‘telling’ or ‘delivery’ paradigm, but they
also re-frame the way we might view feedback within a constructivist paradigm; in
reviewing, students are not just learning by constructing meaning from feedback
provided by others, rather they are learning by constructing feedback ‘meanings’
themselves (Nicol 2011).
This study addressed three research questions:
(1) What were students’ experiences of and attitudes towards peer review in general?
(2) What were students’ perceptions of the learning benefits associated with the different components of the peer review process (giving and receiving feedback), and how did these processes influence their own assignment productions?
(3) What mental processes did students engage in whilst carrying out reviewing
activities and whilst constructing feedback reviews?
Methodology
The context
This study reports on an implementation of peer review within a first-year engineering design class at the University of Strathclyde. In that class, which comprised 82
students, the major assignment is to research and design a product. Students must
provide all the information required to enable the manufacture of the product. The
theme for the design in the year of this study was ‘eating and resting in the city’,
and typical designs included seating arrangements, food trays and sandwich boxes.
Students learn about a variety of design processes and methods, from investigation
through to detailed design. The design project starts as a group task with student
teams researching possible designs through desk research, observations and analysis
of products in current use, etc. This process is intended to replicate practice in an
industry context. Each student then individually produces a product design specification (PDS) and layout drawings for their own design. A PDS is a complex and
detailed document specifying the requirements and constraints on the product being
designed. A PDS is a core element of the design process and, as such, represents a
fundamental learning outcome for this class. The PDS served as the focus for the
peer review task.
Key features of a PDS include detailed requirements on how the product must
perform, what environment it must operate in, what maintenance is expected, what
materials will be used and details of manufacturing facilities, etc. For this particular
design, students were also asked to include a rationale for key PDS components
and values. Students are given an exemplar of a PDS from another area (in the year
of this study it was a design for a stainless steel hot water cylinder), and they
receive lectures about the importance of a PDS and guidance on its construction.
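The paper presents the PDS only in prose. Purely as an illustration of the kind of structured information such a document captures, the following minimal sketch (Python; the product and every value shown are invented for this example) arranges the headings named above into a simple data structure:

```python
# Hypothetical, heavily abbreviated PDS for an invented product.
# Headings follow those named in the text; all values are illustrative.
# A real PDS in this class was far more detailed and, for this cohort,
# also included a rationale for key components and values.
pds = {
    "product": "outdoor food tray",  # invented example
    "performance": {
        "load_capacity_kg": (2.0, 5.0),   # a target range, not a point value
        "service_life_years": 3,
    },
    "environment": {
        "operating_temp_c": (-5, 40),
        "weather_resistant": True,
    },
    "maintenance": "wipe-clean surfaces; no scheduled servicing",
    "materials": ["polypropylene body", "stainless-steel fittings"],
    "manufacture": {"process": "injection moulding", "batch_size": 10000},
    "rationale": {
        "load_capacity_kg": "must carry a full meal and drinks for one person",
    },
}
```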
[…]
least for some students, when peer marking is involved. Hence, this study did not
involve students marking or rating other students’ work; instead, it specifically
focused on peer review and feedback rather than peer assessment. In effect, it
sought to identify the effects of peer review without any confounding effects from
peer marking.
The reviewing process was anonymous, in that students providing online feedback reviews did not know the identity of those who had produced the work, and
those receiving feedback reviews did not know the identity of the reviewer. Also,
each student’s design was different, although in the same topic domain, so they could not directly copy ideas from another’s PDS; rather, they would have to interpret them. The software PeerMark not only enabled anonymity, but it also meant
that the peer review intervention did not increase the administrative workload of
academic staff. The teacher did not directly award marks for carrying out the peer
review activities, but participation was a stated course requirement. Also, students
were given a mark for ‘professionalism’ in this class (worth 10% of the overall
mark) and, when discussing this in class, the teacher made it clear that participation
in the peer review activities would influence that mark.
The criteria for the peer review activity comprised four questions formulated by
the teacher. Students could see these questions when they accessed each peer
assignment online, and there were boxes in which to type their responses. The
assignments they were asked to review were randomly assigned by the software (one possible allocation scheme is sketched after the question list below). The review criteria/questions were as follows:
(1) Do you feel the PDS is complete in the range of headings covered? If not, can you suggest any headings that would contribute towards the completeness of the PDS and explain why they are important?
(2) Is the PDS specific enough? Does it specify appropriate target values or
ranges of values? Please suggest aspects that would benefit from further
detail and explain.
(3) To what extent do you think the rationale is convincing for the PDS? Can
you make any suggestions as to how it might be more convincing? Please
explain.
(4) Can you identify one main improvement that could be made to the PDS?
Provide reason(s) for your answer.
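The paper does not describe how PeerMark allocates assignments internally. The sketch below (Python; the function name and the two-reviews-per-student load are assumptions for illustration) shows one simple scheme consistent with the properties described above: allocation is random, no student reviews their own work, every assignment receives the same number of reviews, and the reviewer–author mapping is held only by the system.

```python
import random

def assign_reviews(student_ids, reviews_per_student=2, seed=None):
    """Randomly allocate peer assignments to reviewers.

    Shuffles the class list, then has each student review the next
    `reviews_per_student` students in the shuffled order, wrapping
    around at the end. This guarantees no self-review and gives every
    assignment exactly `reviews_per_student` reviewers.
    """
    ids = list(student_ids)
    if not 0 < reviews_per_student < len(ids):
        raise ValueError("need more students than reviews per student")
    random.Random(seed).shuffle(ids)
    n = len(ids)
    return {
        ids[i]: [ids[(i + k) % n] for k in range(1, reviews_per_student + 1)]
        for i in range(n)
    }

# Example: a six-student class, two reviews each.
for reviewer, authors in assign_reviews(range(1, 7), seed=42).items():
    print(f"student {reviewer} reviews students {authors}")
```

Anonymity is then a matter of presentation: each reviewer sees only the assignment text and the four questions, never the author’s name, while the system retains the mapping.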
Evaluation
The evaluation of the peer review activities was carried out in two ways. First, students completed an anonymous online survey after the peer review activities had
ended. The survey comprised 21 items and sought information pertinent to all three
research questions – about students’ attitudes towards peer review in general, about
their perceptions of different learning benefits associated with giving and receiving
feedback, and about the mental processes activated by reviewing. Thirteen questions
were of the fixed-response type where students selected an answer or answers or
rated their agreement using a five-point Likert scale. There were also eight
open-ended questions that prompted for further written comments on a previous
fixed-response answer or asked about a specific aspect of the peer review process;
for example, one open-ended question asked students to ‘give examples of what you learned from providing reviews of others’ work’. The qualitative data deriving from these open-ended questions usually comprised a phrase or a short sentence or two. These data were analysed and categorised under common themes relating to the research questions. Sixty-four of the 82 students (78%) completed the online survey, and response rates for the open-ended questions were high, ranging from 40 to 60 responses per question.
Second, focus groups were held with two groups of six students and with one
pair of students. A single student was also interviewed separately. In this paper, all
these are referred to as the ‘focus groups’ for ease of reporting. These interview arrangements were pragmatic, reflecting the need to fit meetings around examinations and student availability. The focus groups and interviews deliberately built on the open-ended survey responses, but were specifically used to gain a deeper insight into the mental processes involved in reviewing and constructing
feedback – the third research question. The following are typical prompts used by
the researcher to promote discussion regarding that issue:
• How did you go about doing the review of the other students’ work?
• When you were doing it, what was going on in your head, can you remember?
• What was the sequence of steps you took in carrying out the review?
• What were you thinking as you were carrying out the review?
The recorded interviews were transcribed. Responses that elaborated on the findings of the survey were categorised accordingly, and additional emergent themes, usually relating to the production of feedback reviews, were also recorded. The procedures used in the analysis have enabled the researchers to tell the students’ story of their experiences of peer review in their own words. However, it is recognised that the data collected and the interpretation are driven by the
research questions that informed this investigation.
[…]
idea at first but found it to be quite helpful’, ‘not completely comfortable but it was worthwhile’ and ‘I felt that it was useful but ended up feeling that I had put more work into my reviews than others’. A number of students maintained that anonymity was important, for example, ‘I felt fine as you didn’t know who was looking at yours or whose you were looking at’ and ‘glad it was anonymous though’.
In the survey, students were asked to rate the quality of the feedback they
received from peers and the feedback they provided to peers. The quality of the
feedback reviews received was rated as excellent or good by over 53% of the students, as of fair quality by 31% and as poor by 13%. Students rated the quality of the feedback reviews they provided as excellent or good (65%), fair (25%) or poor (12%). In the focus groups, some students discussed the poor quality of the feedback reviews they had received and the lack of effort made by some reviewers; this was identified by students as the main limitation of received feedback. When asked how poor quality reviews might be
addressed, students suggested either that ‘it might be better to have more reviews as
then you had a better chance of getting one of good quality’ or ‘lecturers could
mark the review process to address effort issues’.
Interestingly, the students’ positive attitudes in this study contrast with the difficulties and negative attitudes to peer review often reported in the literature (Liu and Carless 2006; Kaufman and Schunn 2011). In part, this difference might be attributed to the way in which peer review was presented to students by the teacher and to the quality of the guidance provided. For example, in the survey almost all students (89%) reported positively on the guidance they received. However, what most
notably distinguishes this study from many others is that students were not asked to
mark the work of peers when providing feedback comments. Hence, it is tempting to conclude that this was the causal factor, as the research shows that it is the marking component of peer review that causes most dissatisfaction (Kaufman and Schunn 2011). Some evidence for this assertion comes from the survey, where a significant proportion of students were unfavourable to the idea of marking. Specifically, when asked whether it would be worthwhile for students to allocate a mark
for each piece of work as part of the peer review process, students were divided in
their answers with 39% responding ‘yes’ it would be worthwhile, 38% responding
‘no’ and 23% responding ‘don’t know’. In the survey, 47 students also provided
reasons for their answers. Over 50% of these responses were reasons for not having
students award marks; the main reasons were that students did not have enough
expertise to mark and/or were not likely to be accurate or fair (e.g. ‘would not have
enough insight into comparative performances to score’, ‘everyone will have a different standard’, ‘students would be too harsh’). Those agreeing that students should allocate a mark mainly commented that this would give them a ‘more accurate picture of how they were doing’. Similar concerns about marking were also raised in
the focus group discussions.
Although more research is required on attitudes to peer review, these findings
suggest that teachers should consider carefully whether to include marking in their
peer review designs.
Table 1 shows the results from two survey questions that asked students about their learning from these different processes. The responses to question 7 show that almost all students (93%) believed that they learned from some aspect of the peer review activity. However, whilst over half reported that they learned from both giving and
receiving feedback, some reported that they learned only from giving and others
only from receiving feedback. In the latter two categories, more than twice as many
students felt that they learned from receiving rather than from giving feedback.
Question 10 addressed the same issue as question 7, but focused on reports of
behaviour rather than perceptions of learning. Again, the responses indicate that
most students (76%) did indeed learn something from the peer review processes, in
that they reported making modifications to their draft assignment. However, in
contrast to question 7, the responses to question 10 show that, in terms of actions
to make improvements, both giving and receiving feedback were perceived as
equally beneficial.
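As a quick check of the arithmetic behind these claims, a minimal sketch (Python; the counts are those reported in Table 1 for the 64 respondents): the percentages are the raw counts over 64 rounded to whole numbers, the 93% is the sum of the three rounded ‘learned’ percentages, and ‘more than twice as many’ compares the 17 receiving-only with the 7 giving-only responses.

```python
# Question 7 counts from Table 1 (n = 64 respondents).
q7 = {"giving only": 7, "receiving only": 17, "both": 35, "neither": 5}
n = sum(q7.values())  # 64

pct = {answer: round(100 * count / n) for answer, count in q7.items()}
print(pct)  # {'giving only': 11, 'receiving only': 27, 'both': 55, 'neither': 8}

# The reported 93% is the sum of the three rounded 'learned' percentages.
print(pct["giving only"] + pct["receiving only"] + pct["both"])  # 93

# More than twice as many learned only from receiving as only from giving:
print(q7["receiving only"] > 2 * q7["giving only"])  # True: 17 > 2 * 7
```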
In the survey, students were also asked to give examples of the actual modifications they made to their draft PDS based on the peer review activities. Forty-one
students responded to this question. The responses were wide ranging; however, the
following are typical examples:
I included specific materials and changed the formatting of the document so it looked
more professional.
I provided more specific numeric values and expanded my rationale after seeing someone else’s PDS and after receiving feedback.
Table 1. Learning from peer review: students’ responses and reported actions to survey questions 7 and 10 (n = 64).

Q7. Which aspect of the peer review activity did you learn from?        % [no. of students]
  Giving feedback                                                        11 [7]
  Receiving feedback                                                     27 [17]
  Both giving and receiving feedback                                     55 [35]
  Neither giving nor receiving feedback                                   8 [5]

Q10. Did you modify your initial assignment as a result of the peer     % [no. of students]
review activity to improve it?
  Yes, as a result of the peer review given                              23 [15]
  Yes, as a result of the peer review received                           25 [16]
  Yes, as a result of the peer review given and received                 28 [18]
  No                                                                     22 [14]
  Not applicable                                                          2 [1]
These comments show that students do revisit, rethink and update their work as a
result of engaging in peer review activities. They also show that students believe
that the changes they make to their assignments are improvements.
In order to gain deeper insight into the differential effects of receiving and giving feedback in peer review, students were also asked in the survey to comment on
what they learned from receiving feedback reviews from peers, and what they
learned from providing feedback reviews to peers. These two questions elicited
quite different responses.
Fifty-four students provided comments describing what they learned from
receiving reviews. The majority (63%) reported that receiving reviews from others
either helped highlight specific areas for improvement (e.g. ‘more rationale needed’,
‘I learned to put more numerical data and figures into my PDS’) or that it helped
bring deficiencies in work to their attention (e.g. ‘problems that I didn’t know of
before were highlighted’). Around a quarter of the students (23%) reported that
receiving reviews was valuable, because it helped them appreciate how other readers might interpret their work (e.g. ‘I could see points from others’ viewpoints’,
‘ways in which other students see my work’). A small number of students (5%)
noted that receiving feedback could be motivational (e.g. ‘the person who reviewed
my PDS gave me positive feedback which helped me a lot’), and a small number
(5%) reported that the reviews they received were not valuable (e.g. ‘they weren’t
very good’). These findings are consistent with prior research on the benefits of
feedback receipt from peers (Topping 1998; Cho and MacArthur 2010).
Forty-five students made comments in the survey describing what they learned
from producing reviews. As highlighted above, this process was perceived as conferring quite different learning benefits from receiving reviews. Some students (15%)
reported that through providing reviews they learned how to think critically or how to
make critical judgements (e.g. ‘how to look at work critically that isn’t your own, it
helps make you a better critical thinker’). Others (13%) reported that it enabled them
to see others’ work from an assessor’s perspective (e.g. ‘looking at the work from a
markers point of view’) or that it helped them better understand the assessment standards, as illustrated by the following comments from two different students:
I was given a greater understanding of the level of work the course may be demanding.
I learned what the standards were and what other people’s standards were.
Importantly, the majority of the students (68%), through their survey comments,
reported that reviewing resulted in their reflecting back on their own work and/or in
their transferring ideas generated through the reviewing process to inform that work, as the following extracts show.
When giving advice to people on theirs it gave me greater perception when reviewing
my own work by listening to my own advice for example.
I had a chance to see other people’s work and aspects of their work that I felt were
lacking in my work – this helped me to improve my work.
From identifying missing pieces in other people’s work I was able to amend my own.
Also notable in the survey data reported above is that comments about receiving
reviews tended to focus more on subject content (i.e. 63% of the total comments
were about areas in need of improvement or that needed clarification, etc.),
whereas those about producing reviews focused more on learning processes (i.e. 96% of total comments were about critical thinking, taking the assessor’s perspective and transfer of learning, with the majority being about learning transfer).
Cho and Cho (2011) have shown that producing reviews for peers leads to
greater improvements in students’ written assignments than the receipt of reviews
from peers. The findings above add to this prior research by providing insights,
from the students’ perspective, into the cognitive processes that might account for
these different effects. In particular, the students’ own accounts suggest that reviewing is especially effective in triggering some powerful mental processes, including critical thinking, the active interpretation and application of assessment criteria, reflection and learning transfer – processes that are normally associated with high levels of academic achievement. In essence, these findings suggest that the practice of reviewing offers considerable potential, arguably even beyond what might be possible through received feedback, for teachers wishing to develop students’ own thinking and assessment skills.
[…]
I think you need both parts but you gain more yourself from giving it as you’re
analysing your own and theirs.
Students also reported through the focus group discussions that, in comparison with
receiving reviews, producing reviews involved them in thinking critically and in
learning to be critical. Many students noted that, if they developed this capacity for
critical thinking, then this would help them to make more objective judgements
about their own work. For example, as one student pointed out:
Giving it is better because that’s what you need to learn – how to be critical of your
own work – how to stand back – and where to be judgemental.
Another benefit highlighted by some students in the focus group discussion was the idea that reviewing gave them more control over, or more responsibility for, feedback processes:
If you’re reviewing it yourself you are more likely to learn as a whole and be able to
apply things in the future. Whereas if you’re just reading someone’s feedback,
probably because of how we’ve always learned, you are supposed to take it on board
and apply it but you think yes that’s how I can improve but you don’t do anything.
I think it’s more useful if you’ve had to go away and do it yourself … [produce feedback] … rather than rely on others’ feedback.
Some students also commented that the act of producing feedback on the work of
peers had reduced their need for, and the value of, feedback from peers. As one
student noted:
I’d already made improvements by the time it came to actually reading what people
had said about mine, I’d already spotted those things.
Indeed, in the focus groups, over half the students interviewed reported that they
had actually updated their own work after the reviewing activities, and before they
received the reviews. Many also reported that having done this, the receipt of
reviews from others did not add to the process.
Reviewing was also seen by some students to address a common limitation of received feedback, namely that it is usually framed with reference to what has been produced and that it does not necessarily push the student to think beyond the confines of their own production, to open up new avenues of inquiry, new perspectives and ways of thinking about the work they have produced. This argument is captured in the following student comment:
For me it would probably be to give feedback because I think seeing what other people have done is more helpful than getting other people’s comments on what you have
already done. By looking at other people’s work you can see for yourself what you
have forgotten or not even thought about. When people give feedback on yours they
generally just talk about what is there. They don’t say, well I did this on mine and
you could put that in yours.
Exposure to others’ work through reviewing was also seen by some students as motivational, incentivising them to improve the quality of their own work, as the following extract shows:

Seeing the quality of others’ work was a bit of a shock, I was, yes, I really need to step mine up, but then it was fine because we could then go and improve on it.
The results presented in this sub-section are very important. They suggest that,
through reviewing the work of peers, students can learn to take control over
their own learning, to generate their own feedback and to be more critical about
their own work. As students themselves reported, reviewing not only puts feedback processes in their hands, but it also reduces their need for received feedback from others. In addition, some students in the focus groups went further,
as shown in the quote above, by noting that reviewing brings into view new
perspectives on their own work, perspectives that might not become available
through received feedback. Overall, these findings suggest that peer reviewing
offers great promise as a method through which students might develop their
capabilities as independent and self-regulated learners, seen as one of the main goals of higher education (Nicol and Macfarlane-Dick 2006; Boud and Associates 2010).
Cognitive processes activated when producing reviews: survey and focus group findings
In the survey, students were asked to comment on how they carried out the peer
review, that is, ‘how they evaluated the quality of the work to provide responses to
the peer review questions’. Thirty-seven students answered this question. Over 50%
of them wrote about how they had used their own work as the benchmark for the
reviewing activity. The actual word ‘compare’ was used by 32% of the students
who responded to this item (e.g. ‘I compared it against my own work and examples
given by the lecturer’, ‘I compared the reviews to my own to see if it was better or
worse and what they could do to change it’).
In the focus groups, students were also asked how they carried out the review
activity. The following comment, made by one student and confirmed by others, provides deeper insight into this comparative process:
I read it through and compared it with what I had done to see if they had put something
I had not … The four questions were useful as they provided a framework for the
review. If we hadn’t had the questions it would have been difficult. I did the reviews
separately and then answered one then the other. The first was a better standard than
the other – so I used the ideas from the better one to comment on the weaker one. I
also read the guidelines … [the review questions] … when I did the peer review. There
were ideas from the good one that I hadn’t even thought of in mine.
As in the survey responses, this student talks about ‘comparing’ the work of others with what she has produced. It appears that, because she has produced work in the same domain as her peers, she already has an ‘internal’ standard with which to evaluate the peers’ work. Hence, this comparison of the peers’ work against this standard inevitably results in a backward reflection on the student’s own work. However, the process is more complex than this. This student also reports making comparative judgements across the reviews, using her evaluation and interpretation of one assignment to comment on another, with the review questions informing the written feedback responses. This demonstrates the value of requiring multiple reviews.
In the survey, students also commented on the use of the review questions as a
framework for their analysis of the peer assignment or for their commentary (e.g. ‘I
analysed the assignment in the context of the review question’, ‘I used the review
questions to help formulate my commentary’). In the focus groups, the effects of
the review questions were probed further. The following are typical comments from
students in different focus groups about the impact of those questions:
You compare it [the other student’s work] to the criteria, but then in the back of your
mind you’re comparing it to your own at the same time. So you’re kind of seeing the
bad points compared to yours and the good points where you can do better on your
own.
I went down the questions and compared it to my own … I was trying to think what
has this person done. Have they put in more effort or more knowledge than me?
You’ve got what you’ve done in the back of your mind whilst you’re going over
theirs so you see where you’ve gone wrong without anyone pointing it out so you
learn it yourself.
What is notable here is that, even whilst discussing the use of the review questions,
all students still allude, in different ways, to a background reflective process involving an active comparison of the other’s work with their own. Arguably, this reflective process, which depends on students having produced work in the same domain
as their peers, is one of the defining features of peer reviewing. Indeed, this type of
reflective comparison would not occur if students were merely asked to review a
published article or to provide an explanation of ideas to other students as in peer
tutoring (Roscoe and Chi 2008). This suggests that the benefits of peer reviewing
do not just derive from producing explanations, one of the interpretations offered
by Cho and MacArthur (2010), but rather from students producing critical reviews
which are grounded in comparison with their own work.
Further insight into the reviewing process emerged from a discussion in one focus group, where members compared peer reviewing with the receipt of teacher feedback:
I think when you are reviewing (the work of peers), it’s more a self-learning process,
you’re teaching yourself, well, I can see somebody’s done that and that’s a strength,
and I should maybe try and incorporate that somehow into my work. Whereas getting
(teacher) feedback you’re kind of getting told what to do. You’re getting told this is
the way you should be doing it, and this is the right way to do it. You’re not really
thinking for yourself … I think it [reviewing] would help you not need so much of
teacher feedback if there was more of this. Whereas, I think if you’re not being able
to do this [reviewing] then you will always be needing more [teacher feedback].
From this comment, it is clear that this student perceives reviewing as an active and
self-regulatory learning process, in contrast to receiving feedback reviews, which
instead is characterised as a telling process. This perception resonates with arguments in the research that transmission is a flawed way to think about learning
from feedback (Sadler 2010), and with the findings reported earlier that reviewing
gave students a sense of control over their own learning. This student also locates a
key benefit deriving from such regulatory feedback activities, namely, a reduced
dependence on the teacher for feedback.
Discussion
As mentioned in the introduction, recent research has identified peer review as a
fertile context for enhancing student learning through feedback processes. However,
whilst that research has demonstrated performance improvements, both when students receive feedback reviews from peers (Cho and MacArthur 2010) and when
they produce feedback reviews for peers (Cho and Cho 2011), little is known about
the learning mechanisms that might account for these improvements. The study
reported here advances this research by exploring, from the students’ perspective,
how receiving and producing reviews differ and, importantly, by teasing out the
cognitive processes activated by reviewing, the most under-researched aspect of
peer review, and by highlighting the role of these processes in the enhancement of
student learning.
From the results reported, it is clear that students are keenly aware that receiving
feedback reviews involves different learning benefits and processes from producing
feedback reviews. Receiving reviews is seen by students as beneficial primarily
because it alerts them to deficiencies or gaps in their work, or because it sensitises
them to the different ways in which readers might interpret what they have written.
Providing reviews, instead, is viewed as beneficial because it engages students
actively in critical thinking, in applying criteria, in reflection and, through this, in
learning transfer. These latter cognitive processes, activated through reviewing, and
their theoretical and practical implications, are discussed below.
One question arising with regard to these evaluative processes is to what extent
the requirement that students produce a self-review after completing the two peer
reviews acted as the driver for the students’ backward reflection on their own work.
This is difficult to establish, as in the focus groups students did not indicate this as
a causal factor. However, in two further investigations of peer review by the lead
author conducted since this study, students reported engaging in the same reflective
processes, even though there was no requirement for self-review. This latter finding
suggests that peer review on its own does indeed encourage reflection; however, it
does not establish what added value, if any, is realised by having students consolidate their reflections by writing them down. This issue warrants further research. In
the meantime, however, it might still be wise to include self-review in peer review
designs, given that maximising reflection on students’ own work is a fundamental
learning objective.
[…]
that students might be able to use the teacher-provided criteria to help calibrate and strengthen their own evaluative capabilities; second, that engagement with the teacher-provided criteria might be enriched through their use alongside student-produced criteria. Speculating further, it might be argued that, through reviewing, students generate richer criteria than those provided by the teacher, but sounder criteria than those they might be able to formulate themselves.
[…]
strengthen inner feedback processes and enable students to compare and calibrate
inner and external feedback, in ways that support their learning.
Second, reviewing addresses a development need that, arguably, is not fully
tackled through higher education curricula. In their future careers, most graduates
are likely to encounter situations where they are required to appraise and comment
on the work or performance of others. Hence, one would expect feedback practices
in higher education to echo these requirements. Yet, this is not the case; most
students neither receive practice in producing feedback nor, indeed, practice in making sense of feedback when it is received from multiple sources (Nicol 2011). Teachers could, however, easily address both these issues by giving peer review activities a more prominent role in higher education curricula than they currently have.
Conclusion
The research reported in this paper throws new light on the theory and practice of
feedback in higher education. It shifts the focus of analysis firmly away from old
delivery models of feedback, which cast the teacher as the transmitter of feedback
messages to students conceived as passive relays. However, whilst it takes on board more recent theoretical positions which recognise the importance of an active role for learners in constructing meaning from received feedback, it goes further than
those positions in that it identifies conditions which would make these processes of
construction even richer and more productive. These conditions involve the staging
of feedback in peer review contexts, where feedback production is recognised as
just as valuable for learning as feedback receipt. Such staging will not only result
in students gaining a deeper insight into subject matter but, crucially, it will also
enable them to acquire skills which are currently not explicitly developed through
the curriculum, even though they constitute an important requirement in professional life beyond university. These skills include the ability to engage with and take ownership of evaluation criteria, to make informed judgements about the quality of the work of others, to formulate and articulate these judgements in written form and, fundamentally, the ability to evaluate and improve one’s own work based
on these processes.
Acknowledgements
The authors would like to thank Dr Michela Clari of the University of Edinburgh for her
many perceptive and constructive feedback comments which helped improve the quality of
this manuscript. We would also like to thank JISC UK for funding the PEER Project which
allowed us to research this topic, and for their further funding for the PEER Toolkit project.
Details of these projects can be found at www.reap.ac.uk. Last, and not least, we thank the Design Engineering students for providing such deep insight into the mental processes
elicited by peer review activities. This went beyond what we had anticipated when we
designed the survey and focus group protocols.
Notes on contributors
David Nicol is an emeritus professor of Higher Education at the University of Strathclyde.
He has published extensively on assessment and feedback in higher education from a
pedagogical, technological and institutional change management perspective (see www.reap.ac.uk).
Caroline Breslin is a learning technology advisor for the Faculty of Engineering at the
University of Strathclyde. Her research publications are in the area of technology-enabled
teaching and learning.
References
Bailey, R., and M. Garner. 2010. “Is the Feedback in Higher Education Assessment Worth the Paper it is Written on? Teachers’ Reflections on Their Practices.” Teaching in Higher Education 15 (2): 187–198.
Barr, R. B., and J. Tagg. 1995. “From Teaching to Learning: A New Paradigm for Undergraduate Education.” Change 27 (6): 13–25.
Boud, D. 2007. “Reframing Assessment as if Learning was Important.” In Rethinking Assessment in Higher Education: Learning for the Longer Term, edited by D. Boud and N. Falchikov, 14–25. London: Routledge.
Boud, D., and Associates. 2010. Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Sydney, Australia: Australian Learning and Teaching Council. www.assessmentfutures.com.
Butler, D. L., and P. H. Winne. 1995. “Feedback and Self-Regulated Learning: A Theoretical Synthesis.” Review of Educational Research 65 (3): 245–281.
Carless, D., M. Salter, M. Yang, and J. Lam. 2011. “Developing Sustainable Feedback Practices.” Studies in Higher Education 36 (4): 395–407.
Cartney, P. 2010. “Exploring the Use of Peer Assessment as a Vehicle for Closing the Gap Between Feedback Given and Feedback Used.” Assessment & Evaluation in Higher Education 35 (5): 551–564.
Cho, K., and C. MacArthur. 2011. “Learning by Reviewing.” Journal of Educational Psychology 103 (1): 73–84.
Cho, Y. H., and K. Cho. 2011. “Peer Reviewers Learn from Giving Comments.” Instructional Science 39 (5): 629–643.
Cho, K., M. Cho, and D. J. Hacker. 2010. “Self-Monitoring Support for Learning to Write.” Interactive Learning Environments 18 (2): 101–113.
Cho, K., and C. MacArthur. 2010. “Student Revision with Peer and Expert Reviewing.” Learning and Instruction 20 (4): 328–338.
Cowan, J. 2010. “Developing the Ability for Making Evaluative Judgements.” Teaching in Higher Education 15 (3): 323–334.
Crisp, B. 2007. “Is it Worth the Effort? How Feedback Influences Students’ Subsequent Submission of Assessable Work.” Assessment & Evaluation in Higher Education 32 (5): 571–581.
Falchikov, N. 2005. Improving Assessment through Student Involvement. London: Routledge–Falmer.
Higher Education Funding Council for England. 2011. The National Student Survey: Findings and Trends 2006–2010. Bristol: Higher Education Funding Council for England.
Hounsell, D. 1997. “Contrasting Conceptions of Essay-Writing.” In The Experience of Learning, edited by F. Marton, D. Hounsell, and N. Entwistle, 106–125. Edinburgh: Scottish Academic Press.
James, R., K.-L. Krause, and C. Jennings. 2010. The First Year Experience in Australian Universities: Findings from 1994–2009. Melbourne: Centre for Higher Education Studies, University of Melbourne.
Kaufman, J. H., and C. D. Schunn. 2011. “Students’ Perceptions about Peer Assessment for Writing: Their Origin and Impact on Revision Work.” Instructional Science 39: 387–406.
Liu, N., and D. Carless. 2006. “Peer Feedback: The Learning Element of Peer Assessment.” Teaching in Higher Education 11 (3): 279–290.
MacLellan, E. 2001. “Assessment for Learning: The Differing Perceptions of Tutors and Students.” Assessment & Evaluation in Higher Education 26 (4): 307–318.
Mayes, T., F. Dineen, J. McKendree, and J. Lee. 2001. “Learning from Watching Others Learn.” In Networked Learning: Perspectives and Issues, edited by C. Steeples and C. Jones, 213–228. London: Springer.
Nicol, D. 2010. “From Monologue to Dialogue: Improving Written Feedback in Mass Higher Education.” Assessment & Evaluation in Higher Education 35 (5): 501–517.
Nicol, D. 2011. Developing the Students’ Ability to Construct Feedback. Gloucester: Quality Assurance Agency for Higher Education. http://tinyurl.com/avp527r.
Nicol, D. 2013. “Resituating Feedback from the Reactive to the Proactive.” In Feedback in Higher and Professional Education: Understanding it and Doing it Well, edited by D. Boud and E. Molloy, 34–49. Oxon: Routledge.
Nicol, D., and D. Macfarlane-Dick. 2006. “Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218.
Pearce, J., R. Mulder, and C. Baik. 2009. Peer Review: Case Studies and Practical Strategies for University Teaching. Melbourne: Centre for the Study of Higher Education, University of Melbourne.
Price, M., K. Handley, and J. Millar. 2011. “Feedback: Focusing Attention on Engagement.” Studies in Higher Education 36 (8): 879–896.
Price, M., and B. O’Donovan. 2006. “Improving Student Performance through Enhanced Student Understanding of Criteria and Feedback.” In Innovative Assessment in Higher Education, edited by C. Bryan and K. Clegg, 100–109. London: Routledge.
Roscoe, R., and M. Chi. 2008. “Tutor Learning: The Role of Explaining and Responding to Questions.” Instructional Science 36: 321–350.
Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in Higher Education 35 (5): 535–550.
Topping, K. 1998. “Peer Assessment between Students in Colleges and Universities.” Review of Educational Research 68 (3): 249–276.
Wingate, U. 2010. “The Impact of Formative Feedback on the Development of Academic Writing.” Assessment & Evaluation in Higher Education 35 (5): 519–533.