higheredu-01-00002
1 Measurement, Evaluation, and Data Science, University of Alberta, Edmonton, AB T6G 2G5, Canada
2 Centre for Research in Applied Measurement and Evaluation, University of Alberta,
Edmonton, AB T6G 2G5, Canada
3 Department of Human Centred Computing, Monash University, Clayton, VIC 3800, Australia
4 IPN-Leibniz Institute for Science and Mathematics Education, University of Kiel, 24118 Kiel, Germany
* Correspondence: wongvora@ualberta.ca
Abstract: For students, feedback received from their instructors can make a big difference in their
learning by translating their assessment performance into future learning opportunities. To date,
researchers have proposed various feedback literacy frameworks, which concern one’s ability to
interpret and use feedback for their learning, to promote students’ feedback engagement by repositioning
them as active participants in the learning process. However, the current feedback literacy
frameworks have not been adapted to digital or e-Assessment settings despite the increasing use of
e-Assessments (e.g., computer-based tests, intelligent tutoring systems) in practice. To address this
gap, this conceptual paper introduces a feedback literacy model in the context of e-Assessments to
present an intersection between e-Assessment features and the ecological model of feedback literacy
for more effective feedback practices in digital learning environments. This paper could serve as
a guideline to improve feedback effectiveness and its perceived value in e-Assessment to enhance
student feedback literacy.
Keywords: feedback literacy; e-Assessment; educational feedback

Citation: Wongvorachan, T.; Bulut, O.; Tsai, Y.-S.; Lindner, M.A. Improving Student Feedback Literacy in e-Assessments: A Framework for the Higher Education Context. Trends High. Educ. 2022, 1, 16–29. https://doi.org/10.3390/higheredu1010002

Academic Editor: Janika Leoste

Received: 7 November 2022; Accepted: 2 December 2022; Published: 6 December 2022

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In education, feedback is a crucial element that translates student performance information into a learning opportunity [1]. Formally defined, feedback is a process in which learners make sense of various pieces of performance-related information to inform their learning development [2,3]. Feedback varies with the purpose of assessment: formative feedback serves as part of the continual learning process (i.e., assessment for learning), whereas summative feedback serves as an indicator of learners’ success (i.e., assessment of learning) [4]. This paper focuses on the formative use of feedback, since we regard feedback as an iterative process between educational actors (e.g., teachers and students). Serving its purpose, feedback can instill changes in students at four levels: (1) the task level, which informs the correctness of students’ responses; (2) the process level, which informs the thought process used to perform the task; (3) the self-regulated learning level, which concerns students’ monitoring of their own actions; and (4) the self level, which concerns the personal characteristics of the students themselves [5]. To date, most research studies on feedback examine the input of the process (e.g., types of feedback and mode/timing of delivery; see Mertens et al. [6] for a meta-analysis), but few examine the other end, the output (e.g., feedback interpretation and use) [7]. To promote feedback practice, more research is needed further down the feedback loop, such as on feedback appreciation, engagement, effects, and use, most of which fall under the term “feedback literacy” [8–10].

Current feedback practice remains sub-optimal in terms of student satisfaction, engagement, and use of feedback due to its reliance on the transmission-centred framework, which views feedback as static information that goes one way from teachers to
students [11,12]. Under this framework, students are positioned merely as recipients who process information, while teachers position themselves as the higher actor in the process [2,13,14]. Additionally, teachers using this feedback approach may over-rely on generically formatted feedback, which could further reduce student engagement because students could perceive the feedback as
irrelevant [15]. However, recent literature on feedback argues that feedback processing is
an interactive and cyclical process between students, teachers, and the feedback itself [16].
This interactive approach to feedback could lead to a positive learning experience, enhanced competency, and improved reflective practice in students; however, the literature
has not been contextualized to the changing nature of learning and teaching [17,18]. In
practice, despite the abrupt change in the mode of teaching during the onset of the
COVID-19 pandemic (i.e., from in-person to online), teachers were found to struggle with
their provision of teaching and feedback in online environments; this setback could be
attributed to the lack of feedback guidelines in the e-Assessment context [19].
Beyond the traditional written forms of assessment, e-Assessment (hereafter, we will
use the term e-Assessment as an alternative to digital assessment) has gained widespread
popularity because of the rapid rate of technological and digital advancement and the
urgent need for online learning as a result of recent COVID-19 disruptions [7,20]. e-Assessment is a component of the technology-assisted classroom environment (or e-learning) that offers advantages over traditional assessment in terms of automation, digitization, and
timeliness [20,21]. This innovative assessment format is prevalent in modern education
as it allows educators to handle a large number of examinees, offer virtual classes over a
long distance, and utilize a variety of teaching materials for assessment administration and
feedback delivery [6,22]. e-Assessment is adopted in both K-12 and higher education, but
our primary focus is on higher education to illustrate the literacy framework through our
context of interest [23].
Several institutions of higher education adopt learning management systems (LMS)
such as Blackboard™ and E-class to deliver e-Assessment to students. Age-wise, e-Assessment is mostly used by undergraduate students (18 years old and above), but age was found to have no significant effect on the perceived utility of the assessment or its feedback [4,24]. Adopting e-Assessment allows educators to take advantage of its distinctive features, such as automation and the leveraging of a data-rich environment. While these advantages are convenient, e-Assessment could be further improved in terms of its perceived usefulness and acceptance among students. Thus, this context has become our focus in an effort to improve e-Assessment practice [25,26].
In this conceptual paper, we apply the existing feedback literacy frameworks of Carless and Boud [3] and Chong [27] to the e-Assessment context as a focused guideline for improved feedback practice. Establishing a context-specific application could also allow instructors to leverage the benefits of e-Assessment to support students in their development of feedback literacy through e-Assessment’s distinctive features, such as its digital environment [21]. Furthermore, a context-specific application of feedback literacy could make interventions from instructors (e.g., consultation with students about feedback usage) more relevant to students’ situated context than the general feedback literacy model [28]. This paper begins with a discussion of feedback
literacy models, as well as their relevance to self-regulated learning. Then, we move our
discussion toward the dimensions of feedback literacy in the e-Assessment setting and
conclude the paper with suggestions to enhance feedback literacy in the mentioned context.
should commit to understanding the purpose of feedback and their position in the feedback loop, and be aware of the specific aspects of feedback that would inform their development by actively processing it [30]. At the same time, instructors should perceive feedback
as a part of the learning process instead of a pedagogical chore to foster students’ feedback
literacy [31]. In fact, instructors should promote conversation and collaboration between
educational actors (i.e., student with student, student with instructors) [32,33]. This type of
practice could promote feedback literacy by fostering learner agency and self-regulation,
which are essential to the effectiveness of feedback [2,14,34]. On the basis of the general
feedback literacy concept, this section discusses the original model of Carless and Boud [3]
and the extended ecological model of Chong [27].
Figure 1. The Original Feedback Literacy Model by Carless and Boud [3].
the individual dimension is divided into the beliefs, goals, learner experience, and abilities
aspects [27]. Chong [27]’s framework views students as an organism that interacts with
their surrounding environment, hence the name ecological. In this model, student engagement with feedback is tied to their learning context, as informed by socio-cultural theories [27].
This notion aligns with self-regulated learning, which will be discussed in more detail in
the following section. In addition to the engagement dimension of the original model, two dimensions are added [27,39]: the contextual dimension, comprising the sociocultural factors surrounding the learners themselves (e.g., feedback features, relationship with teachers, influence of teaching style, and other sociocultural factors), and the individual dimension, comprising factors related to individual differences (e.g., feedback-related beliefs/experience, learning goals, and student ability in the subject). The influence of the three dimensions on students’ feedback uptake could
be further supported by the mechanism of feedback processing by Lui and Andrade [16],
which posits that for students to exhibit a behavioural response to feedback, the characteristics of the feedback itself, students’ motivation toward feedback, and students’ responses to feedback in terms of emotion, interpretation, and decision each play a part in feedback processing.
In this extended model, the ecological factors of contextual and individual dimensions
affect the development of the engagement dimension in feedback literacy by shaping how
students process their feedback. Accordingly, students from different institutions may handle
feedback differently due to their contextual differences (e.g., curriculum and educational
norm) [18]. More specifically, the feedback should take student context (i.e., external
factors) and characteristics (i.e., internal factors) into account, as both factors could affect
how students experience their feedback [40,41]. The alignment of all three dimensions is
also crucial in mediating the development of feedback literacy [27]. For example, students’
tendency to put their feedback to use could be affected by adequate feedback understanding, which can be influenced by the ability to accurately judge their own work based on data from learning analytics (the leveraging of students’ learning data to inform learning and teaching, including feedback provision [42]), and by the ability to positively manage their feedback-related emotions resulting from the perceived care and respect of their instructors [43].
In this example, learning analytics data can be seen as a contextual influence, whereas
perceived care and respect are individual influences.
motivation, and ability to change) and external factors (e.g., the interaction between ac-
tors, feedback feature, and culture) to promote student self-regulation [18,40,41,47]. This
notion aligns with the ecological feedback literacy model that puts forth the importance
of individual and contextual dimensions to feedback literacy. Goal setting, persistence,
invested effort, self-efficacy, and learning tactics are among the most significant predictors of student self-regulated learning [27,46,48]. Learning strategies, in particular, are highly relevant to the e-Assessment context, as they can be determined from trace data through learning analytics techniques [48]. Additionally, goal orientation was found to shape students’ realistic understanding of feedback, thus reducing maladaptive defense mechanisms in the process [49].
Regarding the external factors, quality feedback can empower students’ feedback
literacy, which extends to student self-regulation by raising their awareness of strengths and weaknesses to inform the cognitive, behavioral, and motivational/affective aspects of their learning. This could lead to autonomy in self-improvement and the internalization of
feedback as students are able to make evaluative judgments for themselves [9,31,44]. The
discrepancy between the learner’s expected progress towards their goals and the actual
progress made could also lead to different affective reactions that impact students’ learning.
Thus, being aware of strengths and weaknesses could reduce the discrepancy between
students’ expected progress and their actual progress, which could be helpful in their
learning as they know where they are in terms of class performance, and what they need to
do to attain the expected learning outcomes [44].
The aforementioned ideas show that feedback and self-regulation have a reciprocal
relationship with each other—effective feedback enhances student self-regulation and vice
versa [50]. More specifically, feedback literacy can be developed through self-regulation by
enhancing student ability to understand their feedback and evaluate their own work to
inform their improvement strategy [35,41]. In turn, the improved understanding and use
of external feedback can serve to aid students in self-monitoring to adjust their learning to
better align with their goals [37,46].
Figure 2. Dimensions of Feedback Literacy in the e-Assessment Context (The green portion of the
figure is derived from the original feedback literacy model [3]. The yellow and blue portions of the
figure are derived from the extended ecological feedback literacy model [27]. The boxes represent the
contribution of the present framework in the e-Assessment context).
For the Contextual Dimension, which focuses on external factors of the students, the interpersonal-level context can be attributed to the use of computer-mediated communication technology, such as an integrated discussion forum or email, as a conduit to instructors for collaborative learning, peer feedback, and establishing trust [52]. Students
could improve their learning through the use of a built-in communication function in an
e-learning environment to communicate with their peers [53]. For the textual-level context, digital feedback tools, such as ExamVis (a digital score reporting software) or OnTask (a learning analytics-based feedback generation software), can provide feedback content based on students’ self-efficacy and self-regulation as computed from process data [8,41].
For the instructional-level context, the synchronicity (i.e., synchronous vs. asynchronous) of a course taught on an e-learning platform could affect students’ perceived difficulty of the course, as well as how they use their feedback, because the absence of an instructor in asynchronous courses may make the lecture unable to adapt to students’ needs (e.g., answering questions and adjusting the lecture pace) [54].
For the Engagement Dimension, which is modeled after Carless and Boud’s [3] framework, the feedback appreciation part of students’ cognitive engagement with feedback can be related to the use of personalized or learning analytics-based feedback to make the content more relatable to students, such as the incorporation of students’ formative assessment data and their time use into the feedback [8,41,55]. Instructors can also use innovative modes of feedback, such as interactive graph data (see Bulut et al. [8] for an example on ExamVis) and audio-based feedback delivered through e-learning platforms, to allow for digital feedback revisitation [3,25].
It is worth noting that the feedback appreciation part of the cognitive engagement aspect partly overlaps with the contextual dimension; for example, the usage of innovative modes of feedback (e.g., an interactive dashboard or video-based feedback) and personalized information involves both the contextual dimension and the feedback appreciation aspect of the cognitive engagement dimension, because the characteristics of feedback itself can be considered external influences on student understanding of feedback. The making-judgments part of the cognitive engagement model can be related to the use of self-test features that are usually available via the LMS, which allow students to evaluate themselves against the assessment criteria as they engage in practice quizzes [56]. Regarding students’ affective engagement, which concerns how students manage their emotional states resulting from
feedback, the tone of the feedback report issued from the e-Assessment can influence how well students receive the message that carries information for their improvement; for example, constructive-but-harsh feedback may be met with resistance from students, while feedback with a caring tone could increase students’ engagement with the feedback itself [3,57]. Moreover, students’ anxiety about using e-Assessment can also influence how well they perform on the test, as well as how much trust they put into the platform’s feedback [58].
For the Individual Dimension, which concerns individual differences related to feedback, the
perceived usefulness of e-Assessment is a context-exclusive element that is related to
student belief in their feedback. In particular, the perceived usefulness of the e-Assessment
platform could influence student attitudes toward the platform, including their subsequent
intention to use feedback [58,59]. In addition, students’ expectation of e-Assessment in
terms of its advantage over the traditional assessment could affect their intention and
action in response to feedback [24]. Students who believe that e-Assessment is an effective
means of measuring their performance could also believe that results from the test are
accurate representations of their abilities, which in turn affects their tendency to use the
feedback as well. In terms of feedback-related experience, student familiarity with the
e-Assessment platform could also affect their trust in the precision of results provided
through the platform as they know how the platform works, which in turn affects their
tendency to act on feedback [21]. The perceived ease of use, perceived usefulness, and the compatibility of test item features with academic tasks (e.g., calculation-type items for a mathematics course so that students can express their knowledge of mathematical operations, or drag-and-drop sentence completion items for a language course) are also related
to student acceptance of e-Assessment [58,59]. Lastly, test platform security features, such as smart exam monitoring software or examinee identification software, can also influence the perceived credibility of the feedback [60].
empathetic language and encouragement in their feedback, especially with those who fail
the exam, to foster student acceptance of the feedback [25,57]. Furthermore, emotional
resistance to feedback is natural due to the complexity of human communication; therefore,
to foster student feedback literacy, it is important to teach students how to acknowledge, accept, and manage their emotional states [9,36,40,49]. For example, instructors could communicate with students early in their studies about feedback-associated emotions (e.g., how direct critique might be off-putting but necessary) [36].
Technology-wise, instructors and students should be trained to be familiar with the
e-Assessment before the actual implementation to increase their confidence in using the
platform [21]. Timely technical support during an e-Assessment session is also crucial in
reducing student computer anxiety that could affect their true performance in the test and
its resulting feedback [58]. Lastly, the security of the e-Assessment should be monitored to ensure that the exam is rigorous, thereby upholding the credibility of the scores, which could affect student acceptance of the feedback [60]. This issue could be addressed with technologies such
as identity verification systems to ensure that students’ data is appropriately used and
kept [74]. Moreover, institutions should inform students of which information is being
collected to establish their trust in data privacy, which could maintain their trust in the
system and its provided feedback [75].
(e.g., printing cost, the lack of automation, low test security) allows e-Assessment to be
popular among teachers as well [81]. The adoption of e-Assessment is also effective in
facilitating the development of most aspects of student feedback literacy (i.e., appreciating
feedback, making judgments, and taking action) aside from the aspect of managing affect, therefore elevating the quality of students’ learning as a whole [82]. To this end,
the proposed ideas and guidelines in this paper could further enhance student feedback
literacy as a whole and increase feedback use in addition to the existing practice.
Author Contributions: Conceptualization, T.W. and O.B.; writing—original draft preparation, T.W.;
writing—review and editing, T.W., O.B., Y.-S.T. and M.A.L.; resources, Y.-S.T., M.A.L. and O.B.;
supervision, O.B. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable as this study did not involve humans or animals.
Informed Consent Statement: Not applicable as this study did not involve human participants.
Data Availability Statement: Not applicable as this study did not use or report any data.
Acknowledgments: We would like to acknowledge Ka-Wing Lai for her contributions to the initial development of this project and the design of the visual component of this paper.
Conflicts of Interest: The authors declare no conflict of interest in the writing of the manuscript.
References
1. Watling, C.J.; Ginsburg, S. Assessment, feedback and the alchemy of learning. Med. Educ. 2019, 53, 76–85. [CrossRef] [PubMed]
2. Boud, D. Challenges for reforming assessment: The next decade. In Proceedings of the International Virtual Meeting: Teaching,
Learning & Assessment in Higher Education, Iasi, Romania, 22 November–1 December 2021.
3. Carless, D.; Boud, D. The development of student feedback literacy: Enabling uptake of feedback. Assess. Eval. High. Educ. 2018,
43, 1315–1325. [CrossRef]
4. Marriott, P.; Teoh, L.K. Using screencasts to enhance assessment feedback: Students’ perceptions and preferences. Account. Educ.
2012, 21, 583–598. [CrossRef]
5. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112. [CrossRef]
6. Mertens, U.; Finn, B.; Lindner, M.A. Effects of computer-based feedback on lower-and higher-order learning outcomes: A network
meta-analysis. J. Educ. Psychol. 2022, 114, 1743. [CrossRef]
7. Jiao, H. Enhancing students’ engagement in learning through a formative e-assessment tool that motivates students to take action
on feedback. Australas. J. Eng. Educ. 2015, 20, 9–18. [CrossRef]
8. Bulut, O.; Cutumisu, M.; Aquilina, A.M.; Singh, D. Effects of digital score reporting and feedback on students’ learning in higher
education. Front. Educ. 2019, 4, 65. [CrossRef]
9. Carless, D. Feedback loops and the longer-term: Towards feedback spirals. Assess. Eval. High. Educ. 2019, 44, 705–714. [CrossRef]
10. Daniels, L.M.; Bulut, O. Students’ perceived usefulness of computerized percentage-only vs. descriptive score reports: Associa-
tions with motivation and grades. J. Comput. Assist. Learn. 2020, 36, 199–208. [CrossRef]
11. Tripodi, N.; Feehan, J.; Wospil, R.; Vaughan, B. Twelve tips for developing feedback literacy in health professions learners. Med.
Teach. 2021, 43, 960–965. [CrossRef]
12. Winstone, N.E.; Carless, D. Who is feedback for? The influence of accountability and quality assurance agendas on the enactment
of feedback processes. Assess. Educ. Princ. Policy Pract. 2021, 28, 261–278. [CrossRef]
13. Matthews, K.E.; Tai, J.; Enright, E.; Carless, D.; Rafferty, C.; Winstone, N. Transgressing the boundaries of ‘students as partners’
and ‘feedback’ discourse communities to advance democratic education. Teach. High. Educ. 2021, 1–15. [CrossRef]
14. Winstone, N.E.; Pitt, E.; Nash, R. Educators’ perceptions of responsibility-sharing in feedback processes. Assess. Eval. High. Educ.
2021, 46, 118–131. [CrossRef]
15. Winstone, N.E.; Boud, D. The need to disentangle assessment and feedback in higher education. Stud. High. Educ. 2020, 47,
656–667. [CrossRef]
16. Lui, A.M.; Andrade, H.L. The next black box of formative assessment: A model of the internal mechanisms of feedback processing.
Front. Educ. 2022, 7, 751548. [CrossRef]
17. Quigley, D. When I say . . . feedback literacy. Med. Educ. 2021, 55, 1121–1122. [CrossRef]
18. Rovagnati, V.; Pitt, E.; Winstone, N. Feedback cultures, histories and literacies: International postgraduate students’ experiences.
Assess. Eval. High. Educ. 2021, 47, 347–359. [CrossRef]
19. Peytcheva-Forsyth, R.; Aleksieva, L. Forced introduction of e-assessment during the COVID-19 pandemic: How did the students feel
about that? (Sofia University case). In Proceedings of the AIP Conference Proceedings, Brisbane, Australia, 6–9 December 2021;
Volume 2333, pp. 1–11. [CrossRef]
20. Kwong, T.; Mui, L.; Wong, E.Y.W. A case study on online learning and digital assessment in times of crisis. World J. Educ. Res.
2020, 7, 44–49. [CrossRef]
21. Alruwais, N.; Wills, G.; Wald, M. Advantages and challenges of using e-assessment. Int. J. Inf. Educ. Technol. 2018, 8, 34–37.
[CrossRef]
22. Akbar, M. Digital technology shaping teaching practices in higher education. Front. ICT 2016, 3. [CrossRef]
23. Lynch, M. E-learning during a global pandemic. Asian J. Distance Educ. 2020, 15, 189–195.
24. Dermo, J. E-assessment and the student learning experience: A survey of student perceptions of e-Assessment. Br. J. Educ.
Technol. 2009, 40, 203–214. [CrossRef]
25. Bulut, O.; Cutumisu, M.; Singh, D.; Aquilina, A.M. Guidelines for generating effective feedback from e-assessments. Hacet. Univ.
J. Educ. 2020, 35, 60–72. [CrossRef]
26. Jordan, S. Student engagement with assessment and feedback: Some lessons from short-answer free-text e-assessment questions.
Comput. Educ. 2012, 58, 818–834. [CrossRef]
27. Chong, S.W. Reconsidering student feedback literacy from an ecological perspective. Assess. Eval. High. Educ. 2021, 46, 92–104.
[CrossRef]
28. Winstone, N.E.; Balloo, K.; Carless, D. Discipline-specific feedback literacies: A framework for curriculum design. High. Educ.
2022, 83, 57–77. [CrossRef]
29. Jensen, L.X.; Bearman, M.; Boud, D. Understanding feedback in online learning—A critical review and metaphor analysis.
Comput. Educ. 2021, 173, 104271. [CrossRef]
30. Molloy, E.; Boud, D.; Henderson, M. Developing a learning-centred framework for feedback literacy. Assess. Eval. High. Educ.
2020, 45, 527–540. [CrossRef]
31. Hounsell, D. Chapter 8: Towards more sustainable feedback to students. In Rethinking Assessment in Higher Education: Learning for
the Longer Term; Boud, D.; Falchikov, N., Eds.; Routledge: London, UK, 2007; pp. 101–113.
32. Lyle, S. Dialogic teaching: Discussing theoretical contexts and reviewing evidence from classroom practice. Lang. Educ. 2008,
22, 222–240. [CrossRef]
33. Sutton, P. Towards dialogic feedback. Crit. Reflective Pract. Educ. 2009, 1, 1–10.
34. Yang, M.; Carless, D. The feedback triangle and the enhancement of dialogic feedback processes. Teach. High. Educ. 2013,
18, 285–297. [CrossRef]
35. Carless, D.; Winstone, N. Teacher feedback literacy and its interplay with student feedback literacy. Teach. High. Educ. 2020, 1–14.
[CrossRef]
36. Dawson, P.; Carless, D.; Lee, P.P.W. Authentic feedback: Supporting learners to engage in disciplinary feedback practices. Assess.
Eval. High. Educ. 2021, 46, 286–296. [CrossRef]
37. Butler, D.L.; Winne, P.H. Feedback and self-regulated learning: A theoretical synthesis. Rev. Educ. Res. 1995, 65, 245–281.
[CrossRef]
38. Malecka, B.; Boud, D. Fostering student motivation and engagement with feedback through ipsative processes. Teach. High. Educ.
2021, 1–16. [CrossRef]
39. Han, Y. Written corrective feedback from an ecological perspective: The interaction between the context and individual learners.
System 2019, 80, 288–303. [CrossRef]
40. To, J. Using learner-centred feedback design to promote students’ engagement with feedback. High. Educ. Res. Dev. 2021, 41,
1309–1324. [CrossRef]
41. Tsai, Y.S.; Mello, R.F.; Jovanović, J.; Gašević, D. Student appreciation of data-driven feedback: A pilot study on Ontask. In
Proceedings of the LAK21: 11th International Learning Analytics and Knowledge Conference, Irvine, CA, USA, 12–16 April 2021;
ACM: New York, NY, USA, 2021; pp. 511–517. [CrossRef]
42. Chen, G.; Rolim, V.; Mello, R.F.; Gašević, D. Let’s shine together! A comparative study between learning analytics and educational
data mining. In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, Frankfurt am Main,
Germany, 25–27 March 2020; pp. 544–553.
43. Lim, L.A.; Dawson, S.; Gašević, D.; Joksimović, S.; Pardo, A.; Fudge, A.; Gentili, S. Students’ perceptions of, and emotional
responses to, personalised learning analytics-based feedback: An exploratory study of four courses. Assess. Eval. High. Educ.
2021, 46, 339–359. [CrossRef]
44. Pintrich, P.R. Understanding self-regulated learning. New Dir. Teach. Learn. 1995, 1995, 3–12. [CrossRef]
45. Yu, S.; Liu, C. Improving student feedback literacy in academic writing: An evidence-based framework. Assess. Writ. 2021,
48, 1–11. [CrossRef]
46. Panadero, E. A review of self-regulated learning: Six models and four directions for research. Front. Psychol. 2017, 8, 1–28.
[CrossRef] [PubMed]
47. Evans, C. Making sense of assessment feedback in higher education. Rev. Educ. Res. 2013, 83, 70–120. [CrossRef]
48. Fan, Y.; Saint, J.; Singh, S.; Jovanovic, J.; Gašević, D. A learning analytic approach to unveiling self-regulatory processes in
learning tactics. In Proceedings of the LAK21: 11th International Learning Analytics and Knowledge Conference, Irvine, CA,
USA, 12–16 April 2021; ACM: New York, NY, USA, 2021; pp. 184–195. [CrossRef]
49. Forsythe, A.; Johnson, S. Thanks, but no-thanks for the feedback. Assess. Eval. High. Educ. 2017, 42, 850–859. [CrossRef]
50. Han, Y.; Xu, Y. Student feedback literacy and engagement with feedback: A case study of Chinese undergraduate students. Teach.
High. Educ. 2021, 26, 181–196. [CrossRef]
51. Giossos, Y.; Koutsouba, M.; Lionarakis, A.; Skavantzos, K. Reconsidering Moore’s transactional distance theory. Eur. J. Open
Distance E-Learn. 2009, 2, 1–6.
52. Prakash, L.S.; Saini, D.K. E-assessment for e-learning. In Proceedings of the 2012 IEEE International Conference on Engineering
Education: Innovative Practices and Future Trends (AICERA), Kottayam, India, 19–21 July 2012; IEEE: Piscataway, NJ, USA, 2012;
pp. 1–6. [CrossRef]
53. Tawafak, R.M.; Romli, A.B.; Alsinani, M. E-learning system of UCOM for improving student assessment feedback in Oman higher
education. Educ. Inf. Technol. 2019, 24, 1311–1335. [CrossRef]
54. Guo, S. Synchronous versus asynchronous online teaching of physics during the COVID-19 pandemic. Phys. Educ. 2020, 55, 1–9.
[CrossRef]
55. Bulut, O.; Gorgun, G.; Yildirim-Erbasli, S.N.; Wongvorachan, T.; Daniels, L.M.; Gao, Y.; Lai, K.W.; Shin, J. Standing on the
shoulders of giants: Online formative assessments as the foundation for predictive learning analytics models. Br. J. Educ. Technol.
2022, 1–21. [CrossRef]
56. Steiner, M.; Götz, O.; Stieglitz, S. The influence of learning management system components on learners’ motivation in a
large-scale social learning environment. In Proceedings of the Thirty Fourth International Conference on Information Systems,
Milan, Italy, 15–18 December 2013; pp. 1–20.
57. O’Donnell, F.; Sireci, S.G. Chapter 6: Score reporting issues for licensure, certification, and admission programs. In Score Reporting
Research and Applications; The NCME Applications of Educational Measurement and Assessment; Routledge: London, UK, 2019;
pp. 77–90.
58. Adenuga, K.I.; Mbarika, V.W.; Omogbadegun, Z.O. Technical support: Towards mitigating effects of computer anxiety on
acceptance of e-assessment amongst university students in Sub-Saharan African countries. In ICT Unbounded, Social Impact of Bright
ICT Adoption; Dwivedi, Y., Ayaburi, E., Boateng, R., Effah, J., Eds.; Series Title: IFIP Advances in Information and Communication
Technology; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 558, pp. 48–72. [CrossRef]
59. Alruwais, N.; Wills, G.; Wald, M. An evaluation of the model of acceptance of e-assessment among academics in Saudi universities.
Educ. J. 2018, 7, 23–36. [CrossRef]
60. Liu, I.F.; Chen, R.S.; Lu, H.C. An exploration into improving examinees’ acceptance of participation in an online exam. Educ.
Technol. Soc. 2015, 18, 153–165.
61. Wood, J. Making peer feedback work: The contribution of technology-mediated dialogic peer feedback to feedback uptake and
literacy. Assess. Eval. High. Educ. 2021, 47, 327–346. [CrossRef]
62. Ramaswami, M.; Bhaskaran, R. A CHAID based performance prediction model in educational data mining. IJCSI Int. J. Comput.
Sci. 2010. Available online: http://xxx.lanl.gov/abs/1002.1144 (accessed on 3 April 2022).
63. Tsai, Y.S. Why feedback literacy matters for learning analytics. In Proceedings of the 2022 ISLS Annual Meeting the 16th
International Conference of the Learning Sciences (ICLS), Hiroshima, Japan, 21 November 2022; pp. 27–34.
64. Kuklick, L.; Lindner, M.A. Computer-based knowledge of results feedback in different delivery modes: Effects on performance,
motivation, and achievement emotions. Contemp. Educ. Psychol. 2021, 67, 102001. [CrossRef]
65. Chen, L.; Howitt, S.; Higgins, D.; Murray, S. Students’ use of evaluative judgement in an online peer learning community. Assess.
Eval. High. Educ. 2021, 47, 493–506. [CrossRef]
66. Ryan, T.; Henderson, M.; Ryan, K.; Kennedy, G. Identifying the components of effective learner-centred feedback information.
Teach. High. Educ. 2021, 1–18. [CrossRef]
67. Desai, S.; Chin, J. An explorative analysis of the feasibility of implementing metacognitive strategies in self-regulated learning with
the conversational agents. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 495–499. [CrossRef]
68. Pengel, N.; Martin, A.; Meissner, R.; Arndt, T.; Neumann, A.T.; de Lange, P.; Wollersheim, H.W. TecCoBot: Technology-aided
support for self-regulated learning. arXiv 2021, arXiv:2111.11881.
69. Winstone, N.E.; Mathlin, G.; Nash, R.A. Building feedback literacy: Students’ perceptions of the developing engagement with
feedback toolkit. Front. Educ. 2019, 4, 39. [CrossRef]
70. Jonsson, A. Facilitating productive use of feedback in higher education. Act. Learn. High. Educ. 2012, 14, 63–76. [CrossRef]
71. Klein, C.; Lester, J.; Nguyen, T.; Justen, A.; Rangwala, H.; Johri, A. Student sensemaking of learning analytics dashboard
interventions in higher education. J. Educ. Technol. Syst. 2019, 48, 130–154. [CrossRef]
72. Malecka, B.; Boud, D.; Carless, D. Eliciting, processing and enacting feedback: Mechanisms for embedding student feedback
literacy within the curriculum. Teach. High. Educ. 2020, 27, 908–922. [CrossRef]
73. Jivet, I.; Wong, J.; Scheffel, M.; Valle Torre, M.; Specht, M.; Drachsler, H. Quantum of choice: How learners’ feedback monitoring
decisions, goals and self-regulated learning skills are related. In Proceedings of the LAK21: 11th International Learning Analytics
and Knowledge Conference, Irvine, CA, USA, 12–16 April 2021; ACM: New York, NY, USA, 2021; pp. 416–427. [CrossRef]
74. Apampa, K.M.; Wills, G.; Argles, D. User security issues in summative e-assessment security. Int. J. Digit. Soc. (IJDS) 2010, 1,
1–13. [CrossRef]
75. Reidenberg, J.R.; Schaub, F. Achieving big data privacy in education. Theory Res. Educ. 2018, 16, 263–279. [CrossRef]
76. Glassey, J.; Russo Abegao, F. E-assessment and tailored feedback: Are they contributing to the effectiveness of chemical engineering
education? In Proceedings of the 2017 7th World Engineering Education Forum (WEEF), Kuala Lumpur, Malaysia, 13–16
November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 508–512. [CrossRef]
77. Hunt, P.; Leijen, A.; van der Schaaf, M. Automated feedback is nice and human presence makes it better: Teachers’ perceptions of
feedback by means of an e-portfolio enhanced with learning analytics. Educ. Sci. 2021, 11, 278. [CrossRef]
78. Fiebrink, R.; Gillies, M. Introduction to the special issue on human-centered machine learning. ACM Trans. Interact. Intell. Syst.
(TIIS) 2018, 8, 1–7. [CrossRef]
79. Khanna, M.M. Ungraded pop quizzes: Test-enhanced learning without all the anxiety. Teach. Psychol. 2015, 42, 174–178.
[CrossRef]
80. Kulasegaram, K.; Rangachari, P.K. Beyond “formative”: Assessments to enrich student learning. Adv. Physiol. Educ. 2018,
42, 5–14. [CrossRef] [PubMed]
81. Appiah, M.; Van Tonder, F. E-Assessment in higher education: A review. Int. J. Bus. Manag. Econ. Res. 2018, 9, 1454–1460.
82. Ma, M.; Wang, C.; Teng, M.F. Using learning-oriented online assessment to foster students' feedback literacy in L2 writing
during the COVID-19 pandemic: A case of misalignment between micro- and macro-contexts. Asia-Pac. Educ. Res. 2021, 30, 597–609.
[CrossRef]
83. Bowen, L.; Marshall, M.; Murdoch-Eaton, D. Medical student perceptions of feedback and feedback behaviors within the context
of the “educational alliance”. Acad. Med. 2017, 92, 1303–1312. [CrossRef]
84. Aldriye, H.; Alkhalaf, A.; Alkhalaf, M. Automated grading systems for programming assignments: A literature review. Int. J.
Adv. Comput. Sci. Appl. 2019, 10, 215–222. [CrossRef]