

Article

Improving Student Feedback Literacy in e-Assessments: A Framework for the Higher Education Context

Tarid Wongvorachan 1,*, Okan Bulut 2, Yi-Shan Tsai 3 and Marlit A. Lindner 4

1 Measurement, Evaluation, and Data Science, University of Alberta, Edmonton, AB T6G 2G5, Canada
2 Centre for Research in Applied Measurement and Evaluation, University of Alberta,
Edmonton, AB T6G 2G5, Canada
3 Department of Human Centred Computing, Monash University, Clayton, VIC 3800, Australia
4 IPN-Leibniz Institute for Science and Mathematics Education, University of Kiel, 24118 Kiel, Germany
* Correspondence: wongvora@ualberta.ca

Abstract: For students, feedback received from their instructors can make a significant difference in their learning by translating their assessment performance into future learning opportunities. To date, researchers have proposed various feedback literacy frameworks, which concern one's ability to interpret and use feedback for learning, to promote students' feedback engagement by repositioning them as active participants in the learning process. However, current feedback literacy frameworks have not been adapted to digital or e-Assessment settings despite the increasing use of e-Assessments (e.g., computer-based tests, intelligent tutoring systems) in practice. To address this gap, this conceptual paper introduces a feedback literacy model for the e-Assessment context, presenting an intersection between e-Assessment features and the ecological model of feedback literacy for more effective feedback practices in digital learning environments. This paper could serve as a guideline for improving feedback effectiveness and its perceived value in e-Assessment to enhance student feedback literacy.

Keywords: feedback literacy; e-Assessment; educational feedback

Citation: Wongvorachan, T.; Bulut, O.; Tsai, Y.-S.; Lindner, M.A. Improving Student Feedback Literacy in e-Assessments: A Framework for the Higher Education Context. Trends High. Educ. 2022, 1, 16–29. https://doi.org/10.3390/higheredu1010002

Academic Editor: Janika Leoste
Received: 7 November 2022; Accepted: 2 December 2022; Published: 6 December 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In education, feedback is a crucial element that translates student performance information into a learning opportunity [1]. Formally defined, feedback is a process in which learners make sense of various performance-related information to inform their learning development [2,3]. Feedback varies with the purpose of assessment: formative feedback serves as part of the continual learning process (i.e., assessment for learning), while summative feedback serves as an indicator of learners' success (i.e., assessment of learning) [4]. This paper focuses on the formative use of feedback, since we regard feedback as an iterative process between educational actors (e.g., teachers and students). Serving its purpose, feedback can instill changes in students at four levels: (1) the task level, which informs the correctness of students' responses; (2) the process level, which informs the thought process needed to perform the task; (3) the self-regulated learning level, which concerns students' monitoring of their own actions; and (4) the self level, which concerns personal characteristics of the students themselves [5]. To date, most research studies on feedback examine the input of the process (e.g., types of feedback and the mode/timing of delivery; see Mertens et al. [6] for a meta-analysis), but few studies examine the other end, the output (e.g., feedback interpretation and use) [7]. To promote feedback practice, more research is needed further down the feedback loop, such as on feedback appreciation, engagement, effects, and use, most of which falls under the term "feedback literacy" [8–10].

To date, feedback practice remains sub-optimal in terms of student satisfaction, engagement, and use of feedback due to its reliance on the transmission-centred framework, which views feedback as static information that goes one way from teachers to


students [11,12]. Under this framework, students are perceived merely as processors of feedback, while teachers position themselves as the higher actor in the process [2,13,14]. Additionally, teachers using this approach may over-rely on generically formatted feedback, which could further reduce student engagement because students may perceive the feedback as irrelevant [15]. However, recent literature on feedback argues that feedback processing is an interactive and cyclical process between students, teachers, and the feedback itself [16]. This interactive approach could lead to a positive learning experience, enhanced competency, and improved reflective practice in students; however, the literature has not been contextualized to the changing nature of learning and teaching [17,18]. In practice, despite the abrupt change in the mode of teaching during the onset of the COVID-19 pandemic (i.e., from in-person to online), teachers were found to struggle with their provision of teaching and feedback in online environments; this setback could be attributed to the lack of feedback guidelines in the e-Assessment context [19].
Beyond traditional written forms of assessment, e-Assessment (hereafter, we use the term e-Assessment as an alternative to digital assessment) has gained widespread popularity because of the rapid rate of technological and digital advancement and the urgent need for online learning resulting from recent COVID-19 disruptions [7,20]. e-Assessment is a component of the technology-assisted classroom environment (or e-learning) that has advantages over traditional assessment in terms of automation, digitization, and timeliness [20,21]. This innovative assessment format is prevalent in modern education, as it allows educators to handle large numbers of examinees, offer virtual classes over long distances, and utilize a variety of teaching materials for assessment administration and feedback delivery [6,22]. e-Assessment is adopted in both K-12 and higher education, but our primary focus is on higher education, the context of interest through which we illustrate the literacy framework [23].
Many higher education institutions adopt learning management systems (LMSs) such as Blackboard™ and E-class to deliver e-Assessments to students. Age-wise, e-Assessment is mostly used by undergraduate students (18 years old and above), but age was found to have no significant effect on the perceived utility of the assessment and its feedback [4,24]. Adopting an e-Assessment allows educators to take advantage of its features, such as automation and a data-rich environment. While these advantages are convenient, e-Assessment could be further improved in terms of its perceived usefulness and acceptance by students. Thus, this context has become our focus in an effort to better e-Assessment practice [25,26].
In this conceptual paper, we apply the existing feedback literacy frameworks of Carless and Boud [3] and Chong [27] to the e-Assessment context as a focused guideline for improved feedback practice. Establishing a context-specific application could also allow instructors to leverage the benefits of e-Assessment to support students in their development of feedback literacy through e-Assessment's distinctive features, such as its digital environment [21]. Furthermore, a context-specific application of feedback literacy could make interventions from instructors (e.g., consultations with students about feedback usage) more relevant to students' situated context than the general feedback literacy model [28]. This paper begins with a discussion of feedback literacy models and their relevance to self-regulated learning. Then, we move our discussion toward the dimensions of feedback literacy in the e-Assessment setting and conclude the paper with suggestions to enhance feedback literacy in this context.

2. Feedback Literacy and Learner-Centered Framework


Explicitly defined, feedback literacy is students' ability to understand, interpret, and use feedback to improve their learning development [3,18]. The concept repositions students from passive recipients to active learners to promote shared responsibility in the process while minimizing unilateral decisions from the teacher's side [2,14]. This conceptualization aligns with the metaphors of feedback as a learner's tool and feedback as a dialogue, which suit the learner-centered framework [29]. In the learning process, learners
should commit to understanding the purpose of feedback and their position in the feedback loop, and be aware of the specific aspects of feedback that would inform their development by actively processing it [30]. At the same time, instructors should perceive feedback as a part of the learning process rather than a pedagogical chore in order to foster students' feedback literacy [31]. In fact, instructors should promote conversation and collaboration between educational actors (i.e., student with student, student with instructor) [32,33]. This type of practice could promote feedback literacy by fostering learner agency and self-regulation, which are essential to the effectiveness of feedback [2,14,34]. On the basis of the general feedback literacy concept, this section discusses the original model of Carless and Boud [3] and the extended ecological model of Chong [27].

2.1. The Original Model of Feedback Literacy


Figure 1 depicts the original feedback literacy model [3]; the model explains students' feedback literacy through four components: appreciating feedback, making judgments, managing feedback-associated emotions, and taking feedback-related action [3]. To translate received feedback into actions as explained by the original model, students should understand the meaning of the received feedback and be aware of their role in the learning process, as indicated in the feedback appreciation component [3]. Then, students should evaluate their performance both relative to themselves (i.e., previous results) and relative to their peers to inform their improvement strategy, as reflected in the making judgments component [3,35]. As described in the managing affect component, students must handle the emotional impact of feedback, such as its directness or their perceived trust in instructors, to minimize the bias they may hold about their performance [3,36]. Finally, students can translate the received feedback into actions with self-regulation to adjust their learning process with the obtained information [3,37]. The concept of feedback literacy is important because students with low feedback literacy can be adversely affected by feedback, whereas students with high feedback literacy can gain advantages in learning [3]. Generally, feedback literacy can be developed via peer feedback, analyzing work samples, and teacher facilitation, most of which can be incorporated into a curriculum [3,38].

Figure 1. The Original Feedback Literacy Model by Carless and Boud [3].

2.2. The Ecological Model of Feedback Literacy


Building on the original feedback literacy model, Chong [27] developed the ecological model of feedback literacy by adding contextual and individual dimensions to the existing model of Carless and Boud [3]. In other words, the ecological feedback literacy model has three dimensions (i.e., engagement, contextual, and individual). Each dimension encompasses aspects as follows: the original feedback literacy model (i.e., the engagement dimension) is divided into the cognitive aspect (i.e., feedback appreciation and judgment making), the affective aspect (i.e., affect management), and the behavioural engagement aspect (i.e., action taking); the contextual dimension is divided into context, interpersonal, textual, instructional, and socio-cultural aspects; finally,
the individual dimension is divided into beliefs, goals, learner, experience, and abilities aspects [27]. Chong's [27] framework views students as organisms that interact with their surrounding environment, hence the name ecological. In this model, student engagement with feedback is tied to their learning context, as informed by socio-cultural theories [27]. This notion aligns with self-regulated learning, which will be discussed in more detail in the following section. In addition to the engagement dimension of the original model, the contextual dimension (the sociocultural factors surrounding the learners, e.g., feedback features, relationships with teachers, influence of teaching style, and other socio-cultural factors) and the individual dimension (factors concerning individual differences, e.g., feedback-related beliefs and experience, learning goals, and student ability in the subject) are added [27,39]. The influence of the three dimensions on students' feedback uptake is further supported by the mechanism of feedback processing described by Lui and Andrade [16], which posits that for students to exhibit a behavioural response to feedback, the characteristics of the feedback itself, students' motivation toward feedback, and students' responses to feedback in terms of emotion, interpretation, and decision each play their own part in feedback processing.

In this extended model, the ecological factors of the contextual and individual dimensions affect the development of the engagement dimension of feedback literacy by shaping how students process their feedback. Accordingly, students from different institutions may handle feedback differently due to contextual differences (e.g., curriculum and educational norms) [18]. More specifically, feedback should take student context (i.e., external factors) and characteristics (i.e., internal factors) into account, as both could affect how students experience their feedback [40,41]. The alignment of all three dimensions is also crucial in mediating the development of feedback literacy [27]. For example, students' tendency to put their feedback to use could be affected by adequate feedback understanding, which can be influenced by the ability to accurately judge their own work based on data from learning analytics (the leveraging of students' learning data to inform learning and teaching, including feedback provision [42]), and by the ability to positively manage feedback-related emotions resulting from the perceived care and respect of their instructors [43]. In this example, learning analytics data can be seen as a contextual influence, whereas perceived care and respect are individual influences.

3. Self-Regulated Learning and Feedback Literacy


This section discusses feedback literacy with a focus on the theory of self-regulated learning (SRL) for a more complete picture. SRL, the ability to manage various aspects of oneself for one's learning development, is important to feedback literacy, as students engage in self-monitoring through interactions with feedback from external sources such as peers or instructors [37,44]. For online learning, the development of feedback literacy in students is especially important, as the high flexibility of the online environment necessitates strong SRL skills [37,45]. Panadero [46] organizes SRL theory into three main areas (cognition, corresponding to feedback appreciation and making judgments; motivation, corresponding to the action-taking aspect; and emotion, corresponding to managing feedback-related affect), which align with the feedback literacy components (i.e., appreciating feedback, making judgments, managing affect, and taking action). Additionally, Panadero [46] identifies that instructors can create environments favorable to the development of students' SRL skills by minimizing the cognitive load of the task; this factor can be understood as an environmental aspect that, even though not mentioned within the main areas, is another variable that aligns with the contextual dimension of feedback literacy.
In the overarching picture, approaches to feedback can differ by the underlying learning theory, such as behaviorism (i.e., feedback as a catalyst of behavioral change), social learning (i.e., feedback as interaction between instructor and students), metacognition (i.e., feedback as a part of cognitive development), and constructivism (i.e., feedback as a product of interactions that is used to construct new knowledge) [41]. However, the latter three are the most relevant to the learner-centered feedback framework, as they regard feedback as an integral part of the learning process that relies on both internal factors (e.g., self-regulation, motivation, and ability to change) and external factors (e.g., interactions between actors, feedback features, and culture) to promote student self-regulation [18,40,41,47]. This notion aligns with the ecological feedback literacy model, which puts forth the importance of the individual and contextual dimensions to feedback literacy. Goal setting, persistence, invested effort, self-efficacy, and learning tactics are among the most significant predictors of student self-regulated learning [27,46,48]. Learning strategies, in particular, are very relevant to the e-Assessment context, as they can be determined from trace data through learning analytic techniques [48]. Additionally, goal orientation was found to shape students' realistic understanding of feedback, thus reducing maladaptive defense mechanisms in the process [49].
Regarding the external factors, quality feedback can empower students' feedback literacy, which extends to student self-regulation by raising their awareness of strengths and weaknesses to inform the cognitive, behavioral, and motivational/affective aspects of their learning. This could lead to autonomy in self-improvement and the internalization of feedback, as students become able to make evaluative judgments for themselves [9,31,44]. The discrepancy between a learner's expected progress toward their goals and the actual progress made could also lead to different affective reactions that impact students' learning. Thus, being aware of strengths and weaknesses could reduce the discrepancy between students' expected and actual progress, which could be helpful in their learning, as they know where they stand in terms of class performance and what they need to do to attain the expected learning outcomes [44].

The aforementioned ideas show that feedback and self-regulation have a reciprocal relationship: effective feedback enhances student self-regulation and vice versa [50]. More specifically, feedback literacy can be developed through self-regulation by enhancing students' ability to understand their feedback and evaluate their own work to inform their improvement strategy [35,41]. In turn, the improved understanding and use of external feedback can aid students in self-monitoring to adjust their learning to better align with their goals [37,46].

4. Dimensions of Feedback Literacy in e-Assessment


It is important to develop a feedback literacy framework for the e-Assessment context, as traditional classroom assessment and e-Assessment differ in aspects such as peer feedback and the effect of instructors' presence on students' self-regulation in assessment [38,51]. Here, we apply Chong's feedback literacy model [27] to our context of interest, as it is considered an ecological extension of Carless and Boud's [3] model. Figure 2 depicts our conceptualization of feedback literacy in the e-Assessment context. As e-Assessment is part of a technology-assisted classroom environment, the e-learning domain could be considered an environmental factor of e-Assessment; thus, the two systems (i.e., e-learning and e-Assessment) are tied together in most implementations [52].

Figure 2. Dimensions of Feedback Literacy in the e-Assessment Context (The green portion of the
figure is derived from the original feedback literacy model [3]. The yellow and blue portions of the
figure are derived from the extended ecological feedback literacy model [27]. The boxes represent the
contribution of the present framework in the e-Assessment context).

For the Contextual Dimension, which focuses on factors external to the students, the interpersonal-level context can be attributed to the use of computer-mediated communication technology, such as integrated discussion forums or email, as a conduit to instructors for collaborative learning, peer feedback, and establishing trust [52]. Students could improve their learning by using the built-in communication functions of an e-learning environment to communicate with their peers [53]. For the textual-level context, digital feedback tools such as ExamVis (a digital score reporting software) or OnTask (a learning analytics-based feedback generation software) can provide feedback content based on students' self-efficacy and self-regulation as computed via process data [8,41]. For the instructional-level context, the synchronicity (i.e., synchronous vs. asynchronous) of a course taught on an e-learning platform could affect students' perceived difficulty of the course, as well as how they use their feedback, because the absence of an instructor in asynchronous courses may leave the lecturer unable to adapt to students' needs (e.g., answering questions and adjusting the lecture pace) [54].
For the Engagement Dimension, which is modeled after Carless and Boud's [3] framework, the feedback appreciation part of students' cognitive engagement with feedback can be related to the use of personalized or learning analytics-based feedback to make the content more relatable to students, such as the use of students' formative assessment data and their time use in feedback [8,41,55]. Instructors can also use innovative modes of feedback, such as interactive graph data (see Bulut et al. [8] for an example with ExamVis) and audio-based feedback delivered through e-learning platforms, which also allow students to revisit their feedback digitally [3,25]. It is worth noting that the feedback appreciation part of the cognitive engagement aspect partly overlaps with the contextual dimension; for example, the use of innovative modes of feedback (e.g., interactive dashboards, video-based feedback) and personalized information involves both the contextual dimension and the feedback appreciation aspect
of the cognitive engagement dimension, because the characteristics of the feedback itself can be considered external influences on students' understanding of feedback. The making judgments part of cognitive engagement can be related to the use of self-test features, usually available via the LMS, with which students evaluate themselves against the assessment criteria as they engage in practice quizzes [56]. Regarding students' affective engagement, which concerns how students manage the emotional states resulting from feedback, the tone of the feedback report issued by the e-Assessment can influence how well students receive the message that houses information for their improvement; for example, constructive but harsh feedback may be met with resistance from students, while feedback with a caring tone could increase students' engagement with the feedback itself [3,57]. Moreover, students' anxiety about using e-Assessment can also influence how well they perform on the test and how much trust they place in the platform's feedback [58].
For the Individual Dimension, which concerns individual differences in relation to feedback, the perceived usefulness of e-Assessment is a context-exclusive element related to students' beliefs about their feedback. In particular, the perceived usefulness of the e-Assessment platform could influence students' attitudes toward the platform, including their subsequent intention to use feedback [58,59]. In addition, students' expectations of e-Assessment in terms of its advantages over traditional assessment could affect their intentions and actions in response to feedback [24]. Students who believe that e-Assessment is an effective means of measuring their performance may also believe that the test results are accurate representations of their abilities, which in turn affects their tendency to use the feedback. In terms of feedback-related experience, students' familiarity with the e-Assessment platform could also affect their trust in the precision of results provided through the platform, as they know how the platform works, which in turn affects their tendency to act on feedback [21]. The perceived ease of use, perceived usefulness, and the compatibility of test item features with academic tasks (e.g., calculation-type items for a mathematics course so that students can express their knowledge of mathematical operations, or drag-and-drop sentence completion items for a language course) are also related to student acceptance of e-Assessment [58,59]. Lastly, the security of the test platform, such as smart exam monitoring software or examinee identification software, can also influence the perceived credibility of the feedback [60].

5. Enhancing Students’ Feedback Literacy in e-Assessment


This section offers several suggestions to increase student feedback literacy specifically in the e-Assessment context and its components, as discussed earlier. e-Assessment environments offer opportunities for feedback literacy development in accordance with the ecological model of Chong [27]. To enhance feedback literacy from the external perspective (i.e., from the instructors' side), the use of computer-mediated communication technology (e.g., emails and forums asynchronously; video chats and audio calls synchronously) could improve student feedback engagement by promoting a dialogic process between students and instructors that fosters a sense of community and makes students' voices heard [32,61]. The integration of semi-automatic feedback software, such as ExamVis, OnTask, or the student relationship engagement system, into an e-Assessment environment could provide learning analytic data for instructors to tailor relatable feedback that increases student engagement by taking appropriateness and context into account [8,41,43]. The benefit of learning analytics for feedback literacy can be further leveraged by using predictors that are malleable and grounded in learning theory, such as formative assessment data [55,62]. Instructors should strive to make sense of the data provided by learning analytics by considering the data source, its interpretation, and the relationship of such data to students' learning [63]. The ability to deliver innovative feedback formats with score report software (e.g., graphs and audio) in a way that bypasses time and space barriers is also a helpful addition to the feedback process [11,64]. Moreover, students can develop their evaluative judgment through online peer feedback [65]. On the whole, the
combination of feedback personalization, innovative modes of feedback, and the use of computer-mediated communication could be more positively received by students and, thus, increase their feedback engagement [3].
Aside from improving feedback features and promoting communication, we could also use process-level feedback (see the definition by Hattie and Timperley [5] in the introduction section) to make the content more actionable for students regarding their strengths and weaknesses, because the ultimate goal of feedback is to influence students' learning trajectories beyond the task level. In fact, the need for process-level feedback is supported by the fact that task-level feedback might not allow poorly performing students to understand the errors in their thinking, because such feedback only tells them what the correct answer is, not how to arrive at it [64]. Therefore, process-level feedback and beyond could be more beneficial in the long run, as they instill lasting changes that facilitate feedback use [9]. In addition, feedback should encourage students to use available resources on their own to promote their role as active learners [66].
To craft feedback that instills lasting changes in students, instructors should provide feedback with adequate specificity in terms of direction, for example, "You may want to review the application of concept X to situation Y as explained in lecture Z" instead of a broad statement such as "You should review lecture Z some more." This idea is consistent with Mertens et al.'s [6] finding that elaborated feedback, which gives students hints on how to perform the task correctly rather than direct answers, is effective in enhancing students' learning outcomes. To aid the provision of process-level feedback, natural language processing technology can be used to generate context-relevant feedback from students' data to encourage self-reflection and feedback uptake [67]. For example, instructors can utilize a chatbot to automatically provide feedback in non-technical language to increase the chance of self-reflection in students [68].
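The specificity principle described above can be sketched as a simple template-based generator, a deliberately minimal stand-in for the NLP-driven tools cited; the field names, concepts, and messages below are hypothetical examples, not part of any published system.

```python
def process_level_feedback(item):
    """Generate directed, process-level feedback for one assessment item.

    `item` is a dict describing a student's response; all field names
    and values used here are hypothetical.
    """
    if item["correct"]:
        return "Correct. Well done applying {concept}.".format(**item)
    # Point to the reasoning step and a resource, not just the right answer:
    return ("You may want to review how {concept} applies to {situation}, "
            "as explained in {lecture}.".format(**item))

msg = process_level_feedback({
    "correct": False,
    "concept": "supply and demand",
    "situation": "price ceilings",
    "lecture": "Lecture 4",
})
```

Even this rule-based form follows the elaborated feedback principle: it hints at the process to review rather than stating the correct answer.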
However, while teachers play a significant role in the feedback loop, it could be more effective to focus change on the student side, as students are the main actors in their learning [12]. To enhance feedback literacy from the internal perspective (i.e., from the students' side), instructors can enhance student self-regulation by providing all available resources in e-learning, such as early opportunities for self-testing, to assist students in their self-monitoring process [47,56]. In addition, students should be able to access their feedback and the associated learning analytic information so as to further enhance their evaluative judgment through self-monitoring [3]. In fact, students should be trained to foster their feedback literacy skills during their first year in higher education so they can leverage feedback derived from learning analytics [63]. Instructors could communicate the benefits of such technology to raise students' awareness and motivation for self-improvement [56]. Beyond in-classroom resources, instructors could also utilize out-of-classroom resources such as the Developing Engagement with Feedback Toolkit to develop feedback literacy in students through workshops, consultations, and written booklets [69].
Furthermore, instructors could contribute to students' perceived care and trust by using an instructor-mediated approach to feedback, on the condition that students attend class synchronously to open opportunities for interaction and understanding [17,43,54]. Instructors could also use a dialogic process in combination with Socratic statements that prompt students to reflect on their understanding of a concept and open themselves to the possibility of self-regulated learning [33,70–72]. By doing so, they could initiate productive dialogues that minimize power differences and empower both parties in the process, ultimately contributing to student feedback literacy [14]. Specifically in the e-learning domain, including e-Assessment, the amount of communication between instructors and students matters in determining the amount of dialogue between parties [51]. A flexibly structured e-learning environment with a high amount of communication between instructors and students could encourage help-seeking behavior, which is an aspect of self-regulated learning [51,73].
Trends High. Educ. 2022, 1 24

In addition to student self-regulation, students' attitudes toward and acceptance of the e-Assessment can also influence their experience of feedback [59]. Instructors could use sympathetic language and encouragement in their feedback, especially with students who fail
the exam, to foster student acceptance of the feedback [25,57]. Furthermore, emotional resistance to feedback is natural given the complexity of human communication; therefore, to foster feedback literacy, it is important to teach students how to acknowledge, accept, and manage their emotional states [9,36,40,49]. For example, instructors could communicate with students early in their studies about feedback-associated emotions (e.g., how direct critique might be off-putting but necessary) [36].
Technology-wise, instructors and students should be trained on the e-Assessment platform before the actual implementation to increase their confidence in using it [21]. Timely technical support during an e-Assessment session is also crucial for reducing the computer anxiety that could affect students' true test performance and the resulting feedback [58]. Lastly, the security of the e-Assessment should be monitored to ensure that the exam remains rigorous, upholding the credibility of the scores and, in turn, student acceptance of the feedback [60]. This issue could be addressed with technologies such as identity verification systems that ensure students' data are used and stored appropriately [74]. Moreover, institutions should inform students of which information is being collected to establish trust in data privacy, which helps maintain students' trust in the system and the feedback it provides [75].
6. Conclusions and Practical Implications

This paper discusses frameworks and concepts relevant to feedback literacy (i.e., feedback literacy models and SRL) and contextualizes them to the setting of e-Assessment. The proposed ideas could contribute to the field at both general and specific levels. At the general level, this paper can serve as a guideline for improving feedback effectiveness and its perceived value in e-Assessment to enhance feedback literacy, ultimately lessening the conflict between instructors' feedback practices and students' expectations through tailored feedback [76]. To best promote and sustain student feedback literacy, instructors should move away from positioning themselves above students and treating feedback as a unilateral process, toward dialogic teaching in which student-teacher interaction is emphasized and students are included as active members of the process [13,32]. Despite the benefits of innovative data displays in e-Assessment, human interaction is still highly valued in supporting students' understanding of such information [77]. Indeed, feedback delivered through e-Assessment should be constructed in a human-friendly format that takes students' contexts into account, making it stimulating for students to discuss their feedback with peers and teachers [78]. Moreover, all parties involved in the e-Assessment should be familiar with the platform and its functions, such as self-tests, online forums, and information navigation, to maximize the advantages the assessment provides beyond students' subject-matter ability [21,56].
In addition to the benefits of e-Assessment functions for the development of feedback literacy, the purpose of the feedback also matters. There are more opportunities to develop feedback literacy in formative assessment, where feedback is regarded as part of the learning process [4]. For this reason, low-stakes or formative assessment is preferred over its summative counterpart [25,27]. Additionally, high stakes in assessment are often associated with summative feedback, which leads students to perceive less self-control and fosters a comparative, "race-to-the-top" environment that is detrimental to student learning [49]. This conclusion aligns with the literature: Khanna [79] found that, among classes with graded, ungraded, and no formative assessment, students in the ungraded class performed best. Low-stakes formative assessment imposes less stress and test anxiety on students, which may explain why students can focus on taking risks and learning from their mistakes rather than on achieving a perfect score [80].
The future of digital feedback in e-Assessment looks promising, as the need for online learning environments is growing and students seem to respond positively to such platforms [7]. Indeed, the capability to overcome the shortcomings of traditional assessment (e.g., printing costs, lack of automation, low test security) has made e-Assessment popular among teachers as well [81]. The adoption of e-Assessment also facilitates the development of most aspects of student feedback literacy (i.e., appreciating feedback, making judgments, and taking action), with the exception of managing affect, thereby elevating the quality of students' learning as a whole [82]. To this end, the ideas and guidelines proposed in this paper could further enhance student feedback literacy and increase feedback use beyond existing practice.
7. Limitations and Future Research

Despite offering significant insights into a contextualized feedback literacy model for e-Assessment, our work has some limitations. The models and suggestions in this paper were developed based on the general functions of e-Assessment; however, the characteristics of any particular e-Assessment are contingent upon the purpose of the course subject. Researchers should tailor their feedback literacy strategies to each specific application of e-Assessment to make them as context-relevant as possible. This paper also focuses on higher education as its case. The application of e-Assessment at other educational levels may differ as the curriculum changes from K-12 to higher education; indeed, features of digital feedback such as color, sound, and animation were found to negatively affect the emotional responses of fourth-grade students to classroom feedback [64]. Next, the suggestions made in this paper are theoretical, as empirical evidence to support our assumptions in practice is still missing. Lastly, this conceptual paper is built upon an extensive literature review rather than an exhaustive literature search across databases (e.g., ProQuest, Web of Science, or grey literature) with clear inclusion/exclusion criteria. An exhaustive search might reveal patterns of studies across time that could further support the arguments made in this paper.
Future research could generate empirical evidence on the use of feedback from e-Assessment to foster feedback literacy in higher education, as e-Assessment is widely adopted at the university level [24]. Moreover, students can develop their evaluative judgment through online peer feedback [65]. On the whole, the combination of feedback personalization, innovative modes of feedback, and computer-mediated communication could be more positively received by students and thus increase their feedback engagement. Researchers could also contextualize the framework further from an intercultural perspective to avoid the pitfall of assimilationist practice, especially in international settings where pedagogical practices and students' approaches to education differ from one geographical location to another [18]. Researchers and instructors could also investigate student feedback literacy through quantitative or qualitative methods to explore the utility of the proposed strategies in operational contexts. Notably, Bowen et al. [83] proposed a conceptual model of five factors that influence students' use of feedback: mode of feedback, learning culture, the student-teacher relationship, teacher attributes, and students' attitudes toward feedback. This model could be adapted to our proposed model of feedback literacy in e-Assessment and used to investigate its practicality in real classroom settings empirically. For example, the students' attitudes component in Bowen et al.'s [83] model could be replaced with the perceived usefulness and perceived ease of use of the e-Assessment platform from the individual dimension of our proposed model. Additionally, instructors could develop a discipline-specific framework for developing feedback literacy through e-Assessment to situate their practice within discipline-specific contexts, such as the use of automatically graded programming assignments in computer science education [28,84].
Author Contributions: Conceptualization, T.W. and O.B.; writing—original draft preparation, T.W.;
writing—review and editing, T.W., O.B., Y.-S.T. and M.A.L.; resources, Y.-S.T., M.A.L. and O.B.;
supervision, O.B. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable as this study did not involve humans or animals.
Informed Consent Statement: Not applicable as this study did not involve human participants.
Data Availability Statement: Not applicable as this study did not use or report any data.
Acknowledgments: We would like to acknowledge Ka-Wing Lai for her contributions to the initial development of this project and the design of the visual component of this paper.
Conflicts of Interest: The authors declare no conflict of interest in the writing of the manuscript.
Abbreviations
The following abbreviations are used in this manuscript:

LMS    learning management system
SRL    self-regulated learning

References
1. Watling, C.J.; Ginsburg, S. Assessment, feedback and the alchemy of learning. Med. Educ. 2019, 53, 76–85. [CrossRef] [PubMed]
2. Boud, D. Challenges for reforming assessment: The next decade. In Proceedings of the International Virtual Meeting: Teaching,
Learning & Assessment in Higher Education, Iasi, Romania, 22 November–1 December 2021.
3. Carless, D.; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assess. Eval. High. Educ. 2018,
43, 1315–1325. [CrossRef]
4. Marriott, P.; Teoh, L.K. Using screencasts to enhance assessment feedback: Students’ perceptions and preferences. Account. Educ.
2012, 21, 583–598. [CrossRef]
5. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112. [CrossRef]
6. Mertens, U.; Finn, B.; Lindner, M.A. Effects of computer-based feedback on lower- and higher-order learning outcomes: A network
meta-analysis. J. Educ. Psychol. 2022, 114, 1743. [CrossRef]
7. Jiao, H. Enhancing students’ engagement in learning through a formative e-assessment tool that motivates students to take action
on feedback. Australas. J. Eng. Educ. 2015, 20, 9–18. [CrossRef]
8. Bulut, O.; Cutumisu, M.; Aquilina, A.M.; Singh, D. Effects of digital score reporting and feedback on students’ learning in higher
education. Front. Educ. 2019, 4, 65. [CrossRef]
9. Carless, D. Feedback loops and the longer-term: Towards feedback spirals. Assess. Eval. High. Educ. 2019, 44, 705–714. [CrossRef]
10. Daniels, L.M.; Bulut, O. Students’ perceived usefulness of computerized percentage-only vs. descriptive score reports: Associa-
tions with motivation and grades. J. Comput. Assist. Learn. 2020, 36, 199–208. [CrossRef]
11. Tripodi, N.; Feehan, J.; Wospil, R.; Vaughan, B. Twelve tips for developing feedback literacy in health professions learners. Med.
Teach. 2021, 43, 960–965. [CrossRef]
12. Winstone, N.E.; Carless, D. Who is feedback for? The influence of accountability and quality assurance agendas on the enactment
of feedback processes. Assess. Educ. Princ. Policy Pract. 2021, 28, 261–278. [CrossRef]
13. Matthews, K.E.; Tai, J.; Enright, E.; Carless, D.; Rafferty, C.; Winstone, N. Transgressing the boundaries of ‘students as partners’
and ‘feedback’ discourse communities to advance democratic education. Teach. High. Educ. 2021, 1–15. [CrossRef]
14. Winstone, N.E.; Pitt, E.; Nash, R. Educators’ perceptions of responsibility-sharing in feedback processes. Assess. Eval. High. Educ.
2021, 46, 118–131. [CrossRef]
15. Winstone, N.E.; Boud, D. The need to disentangle assessment and feedback in higher education. Stud. High. Educ. 2020, 47,
656–667. [CrossRef]
16. Lui, A.M.; Andrade, H.L. The next black box of formative assessment: A model of the internal mechanisms of feedback processing.
Front. Educ. 2022, 7, 751548. [CrossRef]
17. Quigley, D. When I say . . . feedback literacy. Med. Educ. 2021, 55, 1121–1122. [CrossRef]
18. Rovagnati, V.; Pitt, E.; Winstone, N. Feedback cultures, histories and literacies: International postgraduate students’ experiences.
Assess. Eval. High. Educ. 2021, 47, 347–359. [CrossRef]
19. Peytcheva-Forsyth, R.; Aleksieva, L. Forced introduction of e-assessment during the Covid-19 pandemic: How did the students feel about that? (Sofia University case). In Proceedings of the AIP Conference Proceedings, Brisbane, Australia, 6–9 December 2021;
Volume 2333, pp. 1–11. [CrossRef]
20. Kwong, T.; Mui, L.; Wong, E.Y.W. A case study on online learning and digital assessment in times of crisis. World J. Educ. Res.
2020, 7, 44–49. [CrossRef]
21. Alruwais, N.; Wills, G.; Wald, M. Advantages and challenges of using e-assessment. Int. J. Inf. Educ. Technol. 2018, 8, 34–37.
[CrossRef]
22. Akbar, M. Digital technology shaping teaching practices in higher education. Front. ICT 2016, 3. [CrossRef]
23. Lynch, M. E-learning during a global pandemic. Asian J. Distance Educ. 2020, 15, 189–195.
24. Dermo, J. E-assessment and the student learning experience: A survey of student perceptions of e-Assessment. Br. J. Educ.
Technol. 2009, 40, 203–214. [CrossRef]
25. Bulut, O.; Cutumisu, M.; Singh, D.; Aquilina, A.M. Guidelines for generating effective feedback from e-assessments. Hacet. Univ.
J. Educ. 2020, 35, 60–72. [CrossRef]
26. Jordan, S. Student engagement with assessment and feedback: Some lessons from short-answer free-text e-assessment questions.
Comput. Educ. 2012, 58, 818–834. [CrossRef]
27. Chong, S.W. Reconsidering student feedback literacy from an ecological perspective. Assess. Eval. High. Educ. 2021, 46, 92–104.
[CrossRef]
28. Winstone, N.E.; Balloo, K.; Carless, D. Discipline-specific feedback literacies: A framework for curriculum design. High. Educ.
2022, 83, 57–77. [CrossRef]
29. Jensen, L.X.; Bearman, M.; Boud, D. Understanding feedback in online learning—A critical review and metaphor analysis.
Comput. Educ. 2021, 173, 104271. [CrossRef]
30. Molloy, E.; Boud, D.; Henderson, M. Developing a learning-centred framework for feedback literacy. Assess. Eval. High. Educ.
2020, 45, 527–540. [CrossRef]
31. Hounsell, D. Chapter 8: Towards more sustainable feedback to students. In Rethinking Assessment in Higher Education: Learning for
the Longer Term; Boud, D.; Falchikov, N., Eds.; Routledge: London, UK, 2007; pp. 101–113.
32. Lyle, S. Dialogic teaching: Discussing theoretical contexts and reviewing evidence from classroom practice. Lang. Educ. 2008,
22, 222–240. [CrossRef]
33. Sutton, P. Towards dialogic feedback. Crit. Reflective Pract. Educ. 2009, 1, 1–10.
34. Yang, M.; Carless, D. The feedback triangle and the enhancement of dialogic feedback processes. Teach. High. Educ. 2013,
18, 285–297. [CrossRef]
35. Carless, D.; Winstone, N. Teacher feedback literacy and its interplay with student feedback literacy. Teach. High. Educ. 2020, 1–14.
[CrossRef]
36. Dawson, P.; Carless, D.; Lee, P.P.W. Authentic feedback: Supporting learners to engage in disciplinary feedback practices. Assess.
Eval. High. Educ. 2021, 46, 286–296. [CrossRef]
37. Butler, D.L.; Winne, P.H. Feedback and self-regulated learning: A theoretical synthesis. Rev. Educ. Res. 1995, 65, 245–281.
[CrossRef]
38. Malecka, B.; Boud, D. Fostering student motivation and engagement with feedback through ipsative processes. Teach. High. Educ.
2021, 1–16. [CrossRef]
39. Han, Y. Written corrective feedback from an ecological perspective: The interaction between the context and individual learners.
System 2019, 80, 288–303. [CrossRef]
40. To, J. Using learner-centred feedback design to promote students’ engagement with feedback. High. Educ. Res. Dev. 2021, 41,
1309–1324. [CrossRef]
41. Tsai, Y.S.; Mello, R.F.; Jovanović, J.; Gašević, D. Student appreciation of data-driven feedback: A pilot study on Ontask. In
Proceedings of the LAK21: 11th International Learning Analytics and Knowledge Conference, Irvine, CA, USA, 12–16 April 2021;
ACM: New York, NY, USA, 2021; pp. 511–517. [CrossRef]
42. Chen, G.; Rolim, V.; Mello, R.F.; Gašević, D. Let’s shine together! a comparative study between learning analytics and educational
data mining. In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, Frankfurt am Main,
Germany, 25–27 March 2020; pp. 544–553.
43. Lim, L.A.; Dawson, S.; Gašević, D.; Joksimović, S.; Pardo, A.; Fudge, A.; Gentili, S. Students’ perceptions of, and emotional
responses to, personalised learning analytics-based feedback: An exploratory study of four courses. Assess. Eval. High. Educ.
2021, 46, 339–359. [CrossRef]
44. Pintrich, P.R. Understanding self-regulated learning. New Dir. Teach. Learn. 1995, 1995, 3–12. [CrossRef]
45. Yu, S.; Liu, C. Improving student feedback literacy in academic writing: An evidence-based framework. Assess. Writ. 2021,
48, 1–11. [CrossRef]
46. Panadero, E. A review of self-regulated learning: Six models and four directions for research. Front. Psychol. 2017, 8, 1–28.
[CrossRef] [PubMed]
47. Evans, C. Making sense of assessment feedback in higher education. Rev. Educ. Res. 2013, 83, 70–120. [CrossRef]
48. Fan, Y.; Saint, J.; Singh, S.; Jovanovic, J.; Gašević, D. A learning analytic approach to unveiling self-regulatory processes in
learning tactics. In Proceedings of the LAK21: 11th International Learning Analytics and Knowledge Conference, Irvine, CA,
USA, 12–16 April 2021; ACM: New York, NY, USA, 2021; pp. 184–195. [CrossRef]
49. Forsythe, A.; Johnson, S. Thanks, but no-thanks for the feedback. Assess. Eval. High. Educ. 2017, 42, 850–859. [CrossRef]
50. Han, Y.; Xu, Y. Student feedback literacy and engagement with feedback: A case study of Chinese undergraduate students. Teach.
High. Educ. 2021, 26, 181–196. [CrossRef]
51. Giossos, Y.; Koutsouba, M.; Lionarakis, A.; Skavantzos, K. Reconsidering Moore's transactional distance theory. Eur. J. Open
Distance E-Learn. 2009, 2, 1–6.
52. Prakash, L.S.; Saini, D.K. E-assessment for e-learning. In Proceedings of the 2012 IEEE International Conference on Engineering
Education: Innovative Practices and Future Trends (AICERA), Kottayam, India, 19–21 July 2012; IEEE: Piscataway, NJ, USA, 2012;
pp. 1–6. [CrossRef]
53. Tawafak, R.M.; Romli, A.B.; Alsinani, M. E-learning system of UCOM for improving student assessment feedback in Oman higher
education. Educ. Inf. Technol. 2019, 24, 1311–1335. [CrossRef]
54. Guo, S. Synchronous versus asynchronous online teaching of physics during the Covid-19 pandemic. Phys. Educ. 2020, 55, 1–9.
[CrossRef]
55. Bulut, O.; Gorgun, G.; Yildirim-Erbasli, S.N.; Wongvorachan, T.; Daniels, L.M.; Gao, Y.; Lai, K.W.; Shin, J. Standing on the
shoulders of giants: Online formative assessments as the foundation for predictive learning analytics models. Br. J. Educ. Technol.
2022, 1–21. [CrossRef]
56. Steiner, M.; Götz, O.; Stieglitz, S. The influence of learning management system components on learners’ motivation in a
large-scale social learning environment. In Proceedings of the Thirty Fourth International Conference on Information Systems,
Milan, Italy, 15–18 December 2013; pp. 1–20.
57. O’Donnell, F.; Sireci, S.G. Chapter 6: Score reporting issues for licensure, certification, and admission programs. In Score Reporting
Research and Applications; The NCME Applications of Educational Measurement and Assessment; Routledge: London, UK, 2019;
pp. 77–90.
58. Adenuga, K.I.; Mbarika, V.W.; Omogbadegun, Z.O. Technical support: Towards mitigating effects of computer anxiety on
acceptance of e-assessment amongst university students in sub-saharan African countries. In ICT Unbounded, Social Impact of Bright
ICT Adoption; Dwivedi, Y., Ayaburi, E., Boateng, R., Effah, J., Eds.; Series Title: IFIP Advances in Information and Communication
Technology; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 558, pp. 48–72. [CrossRef]
59. Alruwais, N.; Wills, G.; Wald, M. An evaluation of the model of acceptance of e-assessment among academics in Saudi universities.
Educ. J. 2018, 7, 23–36. [CrossRef]
60. Liu, I.F.; Chen, R.S.; Lu, H.C. An exploration into improving examinees’ acceptance of participation in an online exam. Educ.
Technol. Soc. 2015, 18, 153–165.
61. Wood, J. Making peer feedback work: The contribution of technology-mediated dialogic peer feedback to feedback uptake and
literacy. Assess. Eval. High. Educ. 2021, 47, 327–346. [CrossRef]
62. Ramaswami, M.; Bhaskaran, R. A CHAID based performance prediction model in educational data mining. IJCSI Int. J. Comput.
Sci. 2010. Available online: http://xxx.lanl.gov/abs/1002.1144 (accessed on 3 April 2022).
63. Tsai, Y.S. Why feedback literacy matters for learning analytics. In Proceedings of the 2022 ISLS Annual Meeting the 16th
International Conference of the Learning Sciences (ICLS), Hiroshima, Japan, 21 November 2022; pp. 27–34.
64. Kuklick, L.; Lindner, M.A. Computer-based knowledge of results feedback in different delivery modes: Effects on performance,
motivation, and achievement emotions. Contemp. Educ. Psychol. 2021, 67, 102001. [CrossRef]
65. Chen, L.; Howitt, S.; Higgins, D.; Murray, S. Students’ use of evaluative judgement in an online peer learning community. Assess.
Eval. High. Educ. 2021, 47, 493–506. [CrossRef]
66. Ryan, T.; Henderson, M.; Ryan, K.; Kennedy, G. Identifying the components of effective learner-centred feedback information.
Teach. High. Educ. 2021, 1–18. [CrossRef]
67. Desai, S.; Chin, J. An explorative analysis of the feasibility of implementing metacognitive strategies in self-regulated learning with
the conversational agents. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 495–499. [CrossRef]
68. Pengel, N.; Martin, A.; Meissner, R.; Arndt, T.; Neumann, A.T.; de Lange, P.; Wollersheim, H.W. TecCoBot: Technology-aided
support for self-regulated learning. arXiv 2021, arXiv:2111.11881.
69. Winstone, N.E.; Mathlin, G.; Nash, R.A. Building feedback literacy: Students’ perceptions of the developing engagement with
feedback toolkit. Front. Educ. 2019, 4, 39. [CrossRef]
70. Jonsson, A. Facilitating productive use of feedback in higher education. Act. Learn. High. Educ. 2012, 14, 63–76. [CrossRef]
71. Klein, C.; Lester, J.; Nguyen, T.; Justen, A.; Rangwala, H.; Johri, A. Student sensemaking of learning analytics dashboard
interventions in higher education. J. Educ. Technol. Syst. 2019, 48, 130–154. [CrossRef]
72. Malecka, B.; Boud, D.; Carless, D. Eliciting, processing and enacting feedback: Mechanisms for embedding student feedback
literacy within the curriculum. Teach. High. Educ. 2020, 27, 908–922. [CrossRef]
73. Jivet, I.; Wong, J.; Scheffel, M.; Valle Torre, M.; Specht, M.; Drachsler, H. Quantum of choice: How learners’ feedback monitoring
decisions, goals and self-regulated learning skills are related. In Proceedings of the LAK21: 11th International Learning Analytics
and Knowledge Conference, Irvine, CA, USA, 12–16 April 2021; ACM: New York, NY, USA, 2021; pp. 416–427. [CrossRef]
74. Apampa, K.M.; Wills, G.; Argles, D. User security issues in summative e-assessment security. Int. J. Digit. Soc. (IJDS) 2010, 1,
1–13. [CrossRef]
75. Reidenberg, J.R.; Schaub, F. Achieving big data privacy in education. Theory Res. Educ. 2018, 16, 263–279. [CrossRef]
76. Glassey, J.; Russo Abegao, F. E-assessment and tailored feedback-are they contributing to the effectiveness of chemical engineering
education? In Proceedings of the 2017 7th World Engineering Education Forum (WEEF), Kuala Lumpur, Malaysia, 13–16
November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 508–512. [CrossRef]
77. Hunt, P.; Leijen, A.; van der Schaaf, M. Automated feedback is nice and human presence makes it better: Teachers’ perceptions of
feedback by means of an e-portfolio enhanced with learning analytics. Educ. Sci. 2021, 11, 278. [CrossRef]
78. Fiebrink, R.; Gillies, M. Introduction to the special issue on human-centered machine learning. ACM Trans. Interact. Intell. Syst.
(TIIS) 2018, 8, 1–7. [CrossRef]
79. Khanna, M.M. Ungraded pop quizzes: Test-enhanced learning without all the anxiety. Teach. Psychol. 2015, 42, 174–178.
[CrossRef]
80. Kulasegaram, K.; Rangachari, P.K. Beyond “formative”: Assessments to enrich student learning. Adv. Physiol. Educ. 2018,
42, 5–14. [CrossRef] [PubMed]
81. Appiah, M.; Van Tonder, F. E-assessment in higher education: A review. Int. J. Bus. Manag. Econ. Res. 2018, 9, 1454–1460.
82. Ma, M.; Wang, C.; Teng, M.F. Using learning-oriented online assessment to foster students’ feedback literacy in l2 writing
during Covid-19 pandemic: A case of misalignment between micro- and macro-contexts. Asia-Pac. Educ. Res. 2021, 30, 597–609.
[CrossRef]
83. Bowen, L.; Marshall, M.; Murdoch-Eaton, D. Medical student perceptions of feedback and feedback behaviors within the context
of the “educational alliance”. Acad. Med. 2017, 92, 1303–1312. [CrossRef]
84. Aldriye, H.; Alkhalaf, A.; Alkhalaf, M. Automated grading systems for programming assignments: A literature review. Int. J.
Adv. Comput. Sci. Appl. 2019, 10, 215–222. [CrossRef]
