
The issue: TECHNOLOGY INTEGRATION

ARTICLE 1

AUTHORS Juan Escalante, Austin Pack and Alex Barrett

TITLE AI-generated feedback on writing: insights into efficacy and ENL student preference

27 October 2023

ABSTRACT The question of how generative AI tools, such as large language models and chatbots, can be leveraged ethically and effectively in education is ongoing. Given the critical role that writing plays in learning and assessment within educational institutions, it is of growing importance for educators to make thoughtful and informed decisions as to how and in what capacity generative AI tools should be leveraged to assist in the development of students' writing skills. This paper reports on two longitudinal studies. Study 1 examined learning outcomes of 48 university English as a new language (ENL) learners in a six-week-long repeated-measures quasi-experimental design where the experimental group received writing feedback generated from ChatGPT (GPT-4) and the control group received feedback from their human tutor. Study 2 analyzed the perceptions of a different group of 43 ENL learners who received feedback from both ChatGPT and their tutor. Results of Study 1 showed no difference in learning outcomes between the two groups. Study 2 results revealed a near-even split in preference for AI-generated or human-generated feedback, with clear advantages to both forms of feedback apparent from the data. The main implication of these studies is that AI-generated feedback can likely be incorporated into ENL essay evaluation without affecting learning outcomes, although we recommend a blended approach that utilizes the strengths of both forms of feedback. The main contribution of this paper is in addressing generative AI as an automatic essay evaluator while incorporating learner perspectives.

STATEMENT 1. Does the application of AI-generated feedback result in superior linguistic progress among ENL students compared to those who receive feedback from a human tutor?
2. Does the preference for AI-generated feedback surpass that for human tutor-generated feedback among ENL students?

THEORY & CONCEPT 1. As Godwin-Jones (2022) pointed out in his treatise on AWE tools in second language writing, GPT-powered programs are capable not only of correcting errors in essays but also of composing them. Given a simple prompt, generative artificial intelligence (GenAI) LLMs, and chatbots that allow users to interface with LLMs, such as ChatGPT and Bard, can produce complete essays that are passable at the university level (Abdel Aal et al., 2022; Herbold et al., 2023).

2. From a foundation of second language acquisition principles, Ingley (2023) proposed several practical ways in which GenAI might be used to improve academic writing in ENL contexts. For example, they propose questioning AI-enabled chatbots and reflecting on the output as a way of generating ideas or better understanding a topic, versus simply asking the AI to brainstorm a topic for you. They also suggest that AI can help by serving in specific roles (e.g., a conference proposal reviewer, a writing teacher in a writing conference) and can organize writing by drafting outlines or by providing feedback on a draft's organization. Similarly, they propose that these AI tools can be asked for feedback on coherence, grammar, vocabulary, and tone to help support formative essay writing. Through purposeful prompting, AI-enabled chatbots can act as a more knowledgeable other (John-Steiner & Mahn, 1996) that can provide comprehensible input (Krashen, 1982) along different stages of the writing process. Suggestions such as these illustrate how instructors can integrate GenAI tools into the writing process (see the sketch below).
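To make this purposeful-prompting idea concrete, the following is a minimal sketch of a tutor-style feedback request sent to a GPT model through the OpenAI Python SDK. It is an illustration only: the system prompt, function name, and feedback focus are assumptions of this summary, not materials from the study.

```python
# A minimal sketch of tutor-style feedback prompting, assuming the openai
# Python SDK (v1+) and an OPENAI_API_KEY set in the environment. The prompt
# wording is invented for illustration; it is not the study's material.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def essay_feedback(draft: str) -> str:
    """Request formative feedback on an ENL student's draft."""
    response = client.chat.completions.create(
        model="gpt-4",  # the model family used in Study 1
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing tutor for university English-as-a-new-"
                    "language students. Give specific, encouraging feedback on "
                    "coherence, grammar, vocabulary, and tone; do not rewrite "
                    "the essay for the student."
                ),
            },
            {
                "role": "user",
                "content": f"Please give formative feedback on this draft:\n\n{draft}",
            },
        ],
    )
    return response.choices[0].message.content
```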

METHOD Quasi-experimental design.

RESULT Study 1 compared human tutor and AI-generated feedback to see if one would influence linguistic gains more than the other. The results indicate that AI-generated feedback did not result in superior linguistic progress among ENL students compared to those who received feedback from a human tutor. The between-subjects variable of group did not have a significant effect on writing scores, suggesting that neither method of feedback was better than the other in terms of scores.
While the study found that AI-generated feedback did not lead to superior linguistic progress among ENL students compared to human tutor feedback, it is important to consider the potential time-saving benefits that AI-generated feedback offers educators. Utilizing AI for feedback can significantly reduce the time teachers spend reviewing and responding to each student's assignment, freeing up valuable time for other tasks. This time efficiency can be particularly advantageous in large classes where providing individualized instructor feedback is logistically challenging and time-consuming.
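A null between-groups effect in a design like this (repeated measures within subjects, feedback group between subjects) is typically tested with a mixed-design ANOVA. Below is a minimal sketch using pingouin; the file and column names (student, group, week, score) are invented for illustration and are not from the study.

```python
# Sketch of a mixed-design ANOVA, assuming hypothetical long-format data:
# one row per student per measurement week.
import pandas as pd
import pingouin as pg

df = pd.read_csv("writing_scores.csv")  # hypothetical columns: student, group, week, score

# 'week' is the within-subjects factor; 'group' (AI vs. tutor feedback) is the
# between-subjects factor. A non-significant 'group' row would mirror the
# null between-groups effect reported above.
aov = pg.mixed_anova(data=df, dv="score", within="week",
                     between="group", subject="student")
print(aov)
```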

Study 2 investigated which form of feedback ENL students preferred and why. We found that about half the students preferred receiving feedback from a human tutor, and half preferred AI-generated feedback. Those who preferred sitting down and discussing their feedback with a tutor cited the face-to-face interaction as having affective benefits, such as increasing engagement, as well as benefits for developing their speaking abilities.
Those who preferred AI-generated feedback primarily cited the clarity and specificity of the feedback as being useful for improving their writing. This echoes the findings of Dai et al. (2023), namely that AI-generated feedback was found to be more readable and detailed than feedback from an instructor.

CONCLUSION To conclude, while admittedly there are a number of vehicles for personalized learning, the potential of GenAI in this area merits further attention. As GenAI continues to be developed and to permeate the sphere of language education, it becomes imperative to ensure a balanced approach, one that capitalizes on its strengths while duly recognizing the indispensable contributions of human pedagogy. The endeavor of comprehending and assessing the capabilities of GenAI, along with its potential influence on language learning and teaching, is arguably now of paramount importance.

ARTICLE 2

AUTHORS Leah Gustilo, Ethel Ong and Minie Rose Lapinid

20 March 2024

TITLE Algorithmically-driven writing and academic integrity: exploring educators' practices, perceptions, and policies in AI era

ABSTRACT Background: Despite global interest in the interface of algorithmically-driven writing tools (ADWTs) and academic integrity, empirical data considering educators' perspectives on the challenges, benefits, and policies of ADWT use remain scarce.

Aim: This study responds to calls for empirical investigation concerning the affordances and encumbrances of ADWTs, and their implications for academic integrity.

Methods: Using a cross-sectional survey research design, we recruited through snowball sampling 100 graduate students and faculty members representing ten disciplines. Participants completed an online survey on perceptions, practices, and policies in the utilization of ADWTs in education. The Technology Acceptance Model (TAM) helped us understand the factors influencing the acceptance and use of ADWTs.

Results: The study found that teacher respondents highly value the diverse ways ADWTs can support their educational goals (perceived usefulness). However, they must first overcome barriers such as limited access to these tools (perception of external control), a perceived lack of knowledge on their use (computer self-efficacy), and concerns about ADWTs' impact on academic integrity, creativity, and more (output quality).

Conclusion: AI technologies are making headway in more educational institutions because of their proven and potential benefits for teaching, learning, assessment, and research. However, AI in education, particularly ADWTs, demands critical awareness of ethical protocols and entails collaboration and empowerment of all stakeholders by introducing innovations that showcase human intelligence over AI or in partnership with AI.

Keywords: ChatGPT, Academic integrity, Digital writing tools, Academic misconduct, Algorithmic writing
STATEMENT 1. What are the perceptions of educators regarding the use of
these tools and their implications for academic integrity?
2. What are the current practices of educators in using ChatGPT,
DWAs, MTs, and APTs?
3. What are the policies provided by their institutions and their own
suggested policies for the meaningful and ethical use of these
tools?

THEORY & CONCEPT As the number of students and researchers who use AI-powered technologies to complete their tasks continues to grow, studies have reported mixed findings and recommendations regarding the utilization of AI-powered technologies in the academe (Adiguzel et al. 2023; Cassidy 2023). Research investigating the ethical use of AI in teaching, learning, and assessment cautioned that these technologies may lead to cheating and fraud and compromise our core human values of honesty and integrity (Cotton et al. 2024; Dehouche 2021; Eaton 2021). Previous studies have also called for formulating and revising educational policies and guidelines to prevent and detect academic misconduct in submitted work and to propose alternative assessments that minimize the use of AI-powered technologies (Cassidy 2023; Lim et al. 2023).
An analysis of 142 academic integrity policies of higher education institutions related to the use of ADWTs revealed a gap in explicitly mentioning "AI", according to Perkins and Roe (2023). This underscores the call for revisiting and revising relevant policies and regulations to explicitly address the implications of ADWTs for learning while emphasizing the proper and ethical use of these technologies. Chan (2023) described an AI educational policy framework derived from examining the perceptions of teaching staff in Hong Kong universities regarding the integration of generative AI technologies in education.
The resulting framework contains three dimensions: pedagogical, governance, and operational; it requires strong collaboration among key stakeholders, including institutional leaders and administrators, teaching staff, and students, for its successful implementation. The pedagogical dimension encourages educators to adopt AI technologies to equip students for an AI-driven workplace. Technology use, however, should be tied to pedagogical practices and learning theories when designing instructional materials and learning activities (Adiguzel et al. 2023). The governance dimension urges institutional leaders to attend to ethical concerns through policies that promote accountability and the responsible use of AI centered on human well-being and values (Dignum 2019). The operational dimension acknowledges the need for training, support, and monitoring of appropriate AI technologies.

METHOD Survey research design

RESULT The study found that teacher respondents highly value the diverse ways ADWTs can support their educational goals (perceived usefulness). However, they must first overcome barriers such as limited access to these tools (perception of external control), a perceived lack of knowledge on their use (computer self-efficacy), and concerns about ADWTs' impact on academic integrity, creativity, and more (output quality).
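The parenthetical labels above are TAM constructs, which surveys like this usually measure with batteries of Likert-scale items. A hypothetical sketch of scoring such constructs with pandas follows; the item-to-construct mapping is invented and does not reproduce the study's questionnaire.

```python
# Hypothetical TAM construct scoring: average the 1-5 Likert items that
# belong to each construct. The item names are invented for illustration.
import pandas as pd

CONSTRUCTS = {
    "perceived_usefulness": ["pu1", "pu2", "pu3"],
    "perception_of_external_control": ["pec1", "pec2"],
    "computer_self_efficacy": ["cse1", "cse2"],
    "output_quality": ["oq1", "oq2", "oq3"],
}

def construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per TAM construct for each respondent."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in CONSTRUCTS.items()}
    )
```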

CONCLUSION AI technologies are making headway in more educational institutions because of their proven and potential benefits for teaching, learning, assessment, and research. However, AI in education, particularly ADWTs, demands critical awareness of ethical protocols and entails collaboration and empowerment of all stakeholders by introducing innovations that showcase human intelligence over AI or in partnership with AI.

ARTICLE 3

TITLE Cultivating writing skills: the role of ChatGPT as a learning assistant—a case study

18 March 2024

AUTHORS Nermin Punar Özçelik and Gonca Yangın Ekşi

ABSTRACT Artificial intelligence (AI) has garnered considerable interest in the field of language education in recent times; however, limited research has focused on the role of AI in the specific context of register knowledge learning during English language writing.

This study aims to address this research gap by examining the impact of ChatGPT, an AI-powered chatbot, on the acquisition of register knowledge across various writing tasks. The research design employed a one-shot case study pre-experimental design, with 11 voluntary participants selected through convenience sampling. Preliminary results indicate that students found ChatGPT beneficial for acquiring formal register knowledge but perceived it as unnecessary for informal writing. Additionally, the effectiveness of ChatGPT in teaching neutral registers was questioned by the participants.

This research contributes to the existing literature by shedding new light on the effects of AI-generated chatbots in register learning during the writing process, offering insights into their potential as learning assistants. Further investigation is warranted to explore the broader implications and applications of AI in language learning contexts.

STATEMENT 1. Can the use of ChatGPT as a learning assistant help students improve self-editing of their writing?
2. What are the students' opinions and suggestions regarding using ChatGPT as a learning assistant?

THEORY & CONCEPT 1. As technology develops, new tools have been emerging. One of these innovative and modern technologies is Artificial Intelligence (AI). AI is a term defined in different ways. McCarthy (2007) came up with one of AI's first and most significant definitions: "the science and engineering of making intelligent machines". Shneiderman (2020) claims that AI is a type of system that can be automated using technologies like machine learning, neural nets, and statistical methods. According to him, these systems can help us do things faster and more accurately than we otherwise could. Artificial intelligence exists with us in many areas of life, even if we are unaware of its existence.

2. The chatbot is one of the innovative technologies of AI, and it refers to an artificially intelligent computer program that can carry out audio or text conversations (Haristiani, 2019). Many information-focused websites and messaging programs (e.g., of universities, libraries, and museums) currently have online chatbots (Fryer et al., 2020). Fryer et al. (2020) claim that chatbots are not new; instead, they have existed for decades. In the early 2000s, Coniam (2004) evaluated two chatbots as potential language-learning companions. One of them was Dave, developed by the ALICE Artificial Intelligence Foundation, which was portrayed as an ideal personal tutor (Coniam, 2004, p. 160).

3. The use of ChatGPT as a learning assistant can be associated with several learning theories, such as constructivism, social constructivism, cognitive load theory, and information processing theory. According to constructivist learning theory, learners can construct their own learning by actively engaging with new information and building on their knowledge (Bruner, 1996). By offering individualized feedback and suggestions appropriate to learners' needs and prior knowledge, ChatGPT can act as an assistant in the learning process. In terms of social constructivist learning, the role of social interaction and collaboration is of great importance (Vygotsky, 1980). ChatGPT can promote social interaction by offering a conversational interface in which learners are able to collaborate within a natural environment. Cognitive load theory, on the other hand, claims that, due to the limited capacity of cognition, learning materials should be designed to balance learners' cognitive load (Atkinson & Shiffrin, 1968). Therefore, by combining feedback with a conversational interface, ChatGPT might be a beneficial tool for reducing unneeded cognitive load while increasing the necessary cognitive load to enhance effective learning, as claimed in cognitive load theory. Lastly, information processing theory identifies several phases through which learning becomes intake, the final learned version of knowledge (Simon, 1978). ChatGPT can aid learners in this process by offering feedback and guidance that corresponds to each learner's strengths, weaknesses, and learning style.

METHOD A case study

RESULT 1. Students showed interest and engagement in using ChatGPT as a learning assistant for writing tasks. They tried to actively participate and write their tasks with the help of ChatGPT. Students also found it beneficial to continue the conversation with ChatGPT to ask for more examples and clarification, indicating a desire for improvement.
2. The results under the opinion theme showed that the students had mixed opinions about using ChatGPT for formal and informal texts. ChatGPT was perceived as more beneficial for formal text corrections. It was criticized for making changes without providing explanations. Students felt that ChatGPT focused more on grammar and punctuation than vocabulary. Therefore, even if some of them were keen to use ChatGPT for their formal writing, others did not want to use ChatGPT for their informal writing in particular.

CONCLUSION The results of the study showed that ChatGPT has the potential to assist students in developing their writing abilities, particularly in formal registers. Students used ChatGPT enthusiastically and actively for their writing tasks. They benefited from its suggestions and corrections to enhance the formal aspects of their writing. The study noted several challenges, though, including technical difficulties and limitations in interpreting informal and neutral registers. Despite these drawbacks, students' varied viewpoints and insightful ideas emphasized the need for significant functional improvements to ChatGPT to make it a more useful learning tool for self-editing. With careful evaluation and modifications, ChatGPT can provide significant assistance to students in their writing tasks.

ARTICLE 4

AUTHORS Chaoran Wang

TITLE Exploring Students' Generative AI-Assisted Writing Processes: Perceptions and Experiences from Native and Nonnative English Speakers

14 May 2024

ABSTRACT Generative artificial intelligence (AI) can create sophisticated textual and multimodal content readily available to students. Writing-intensive courses and disciplines that use writing as a major form of assessment are significantly impacted by advancements in generative AI, as the technology has the potential to revolutionize how students write and how they perceive writing as a fundamental literacy skill. However, educators are still at the beginning stage of understanding students' integration of generative AI in their actual writing process. This study addresses the urgent need to uncover how students engage with ChatGPT throughout different components of their writing processes and their perceptions of the opportunities and challenges of generative AI. Adopting a phenomenological research design, the study explored the writing practices of six students, including both native and nonnative English speakers, in a first-year writing class at a higher education institution in the US. Thematic analysis of students' written products, self-reflections, and interviews suggests that students utilized ChatGPT for brainstorming and organizing ideas as well as for assisting with both global (e.g., argument, structure, coherence) and local issues of writing (e.g., syntax, diction, grammar), while they also had various ethical and practical concerns about the use of ChatGPT. The study brought to the fore two dilemmas encountered by students in their generative AI-assisted writing: (1) the challenging balance between incorporating AI to enhance writing and maintaining their authentic voice, and (2) the dilemma of weighing the potential loss of learning experiences against the emergence of new learning opportunities accompanying AI integration. These dilemmas highlight the need to rethink learning in an increasingly AI-mediated educational context, emphasizing the importance of fostering students' critical AI literacy to promote their authorial voice and learning in AI-human collaboration.

STATEMENT 1. How do students utilize ChatGPT in their writing processes?
2. How do student writers perceive the benefits of integrating ChatGPT into their writing?
3. What concerns and limitations do students experience when using ChatGPT to assist with their writing?
4. What considerations do students identify as important when engaging in generative AI-assisted writing?

THEORY & CONCEPT 1. Researchers have long been studying the utilization of AI technologies to support writing and language learning (Schulze, 2008). Three major technological innovations have revolutionized writing: (1) word processors, which represented the first major shift from manual to digital writing, replacing traditional typewriters and manual editing processes; (2) the Internet, which introduced web-based platforms, largely promoting the communication and interactivity of writing; and (3) natural language processing (NLP) and artificial intelligence, bringing about tools capable of real-time feedback and content and thinking assistance (Kruse et al., 2023). These technologies have changed writing from a traditionally manual and individual activity into a highly digital one, radically transforming writing processes, writers' behaviors, and the teaching of writing. This evolution reflects a broader shift towards a technologically sophisticated approach to writing instruction.
2. Research suggests that adopting AI in literacy and language education has advantages such as supporting personalized learning experiences, providing differentiated and immediate feedback (Huang et al., 2022; Bahari, 2021), and reducing students' cognitive barriers (Gayed et al., 2022). Researchers also note challenges such as the varied level of technological readiness among teachers and students as well as concerns regarding accuracy, biases, accountability, transparency, and ethics (e.g., Kohnke et al., 2023; Memarian & Doleck, 2023; Ranalli, 2021).
3. With sophisticated and multilingual language generation capabilities, the latest advancements in generative AI and large language models, such as ChatGPT, unlock new possibilities and challenges. Scholars have discussed how generative AI can be used in writing classrooms. Tseng and Warschauer (2023) point out that ChatGPT and AI writing tools may rob language learners of essential learning experiences; however, if educators ban them, students will also lose essential opportunities to learn how to use AI to support their learning and their future work. They suggest that educators should not try to "beat" but rather "join" and "partner with" AI (p. 1). Barrot (2023a) and Su et al. (2023) both review ChatGPT's benefits and challenges for writing, pointing out that ChatGPT can offer a wide range of context-specific writing assistance such as idea generation, outlining, content improvement, organization, editing, proofreading, and post-writing reflection. Similar to Tseng and Warschauer (2023), Barrot (2023a) is also concerned about students' learning loss due to their use of generative AI in writing and their over-reliance on AI. Moreover, Su et al. (2023) specifically raise concerns about the issues of authorship and plagiarism, as well as ChatGPT's shortcomings in logical reasoning and information accuracy.

METHOD Phenomenological research design

RESULT 1. The students reported using ChatGPT throughout different components of writing their argumentative essays, including (1) brainstorming, (2) outlining, (3) revising, and (4) editing.
2. Utilizing ChatGPT in these various writing process components, the students reported that ChatGPT had the following benefits: (1) accelerating their writing process, (2) easing their cognitive load, (3) fostering new learning opportunities, (4) getting immediate feedback, and (5) promoting positive feelings about writing.
3. Despite the benefits and usefulness of ChatGPT for assisting with their writing, the students also expressed many reservations and limitations regarding the AI tool. The first concern was about the false information it produced and its potential to mislead people. The students commented that ChatGPT tended to "make up information" (Emma), "make assumptions and guesses" (Su), and generate "inaccurate information" (Nora), "wrong information" (Alex), and "nonsense" (Lydia).
4. Presented with these limitations of ChatGPT, the students shared some important aspects they think should be considered when incorporating AI into writing, summarized as follows: (1) balanced and moderate use of AI, (2) critical use of AI, (3) ethical considerations, (4) the need for human voice, (5) the importance of authenticity, (6) seizing AI as a learning opportunity, and (7) transparency from, and conversation between, teachers and students.

CONCLUSION This study explored students' generative AI-assisted writing processes in a first-year writing class at an American college. The study found that students utilized generative AI to assist with both global (e.g., argument, structure, coherence) and local issues of writing (e.g., syntax, diction, grammar), while they also had various ethical and practical concerns about the use of AI. Findings showed that large language models offer unique linguistic capabilities that L2 writers can leverage. The study highlights the urgency of explicit teaching of critical AI literacy and the value of (post)process-oriented writing pedagogy (e.g., Graham, 2023) in college writing classrooms so that students not only understand AI writing tools' functions and limitations but also know how to utilize and evaluate them for specific communication and learning purposes.

ARTICLE 5

TITLE Feedback sources in essay writing: peer-generated or AI-generated feedback?

AUTHORS Seyyed Kazem Banihashem, Nafiseh Taghizadeh Kerman, Omid Noroozi, Jewoong Moon and Hendrik Drachsler

12 April 2024

ABSTRACT Peer feedback is introduced as an effective learning strategy, especially in large classes where teachers face high workloads. However, for complex tasks such as writing an argumentative essay, peers may not provide high-quality feedback without support, since doing so requires a high level of cognitive processing, critical thinking skills, and a deep understanding of the subject. With the promising developments in Artificial Intelligence (AI), particularly after the emergence of ChatGPT, there is a global debate about whether AI tools can be seen as a new source of feedback for complex tasks. The answer to this question is not completely clear yet, as there are limited studies and our understanding remains constrained. In this study, we used ChatGPT as a source of feedback on students' argumentative essay writing tasks and compared the quality of ChatGPT-generated feedback with peer feedback. The participant pool consisted of 74 graduate students from a Dutch university. The study unfolded in two phases: first, students' essay data were collected as they composed essays on one of the given topics; subsequently, peer feedback and ChatGPT-generated feedback data were collected by engaging peers in a feedback process and using ChatGPT as a feedback source. Two coding schemes, one for essay analysis and one for feedback analysis, were used to measure the quality of essays and feedback. A MANOVA was then employed to determine any distinctions between the feedback generated by peers and ChatGPT. Additionally, Spearman's correlation was utilized to explore potential links between essay quality and the feedback generated by peers and ChatGPT. The results showed a significant difference between feedback generated by ChatGPT and peers. While ChatGPT provided more descriptive feedback, including information about how the essay is written, peers provided feedback that included identification of problems in the essay. An overarching look at the results suggests a potential complementary role for ChatGPT and students in the feedback process. Regarding the relationship between the quality of essays and the quality of the feedback provided by ChatGPT and peers, we found no overall significant relationship. These findings imply that essay quality does not affect the quality of either ChatGPT or peer feedback. The implications of this study are valuable, shedding light on the prospective use of ChatGPT as a feedback source, particularly for complex tasks like argumentative essay writing.

STATEMENT RQ1. To what extent does the quality of peer-generated and ChatGPT-generated feedback differ in the context of essay writing?
RQ2. Does a relationship exist between the quality of essay writing performance and the quality of feedback generated by peers and ChatGPT?

THEORY & CONCEPT Feedback is acknowledged as one of the most crucial tools for enhancing learning (Banihashem et al., 2022). The general and well-accepted definition of feedback conceptualizes it as information provided by an agent (e.g., teacher, peer, self, AI, technology) regarding aspects of one's performance or understanding (e.g., Hattie & Timperley, 2007). Feedback serves to heighten students' self-awareness concerning their strengths and areas warranting improvement, by providing the actionable steps required to enhance performance (Ramson, 2003). The literature abounds with studies that illuminate the positive impact of feedback on diverse dimensions of students' learning journey, including increasing motivation (Amiryousefi & Geld, 2021), fostering active engagement (Zhang & Hyland, 2022), and promoting self-regulation and metacognitive skills (Callender et al., 2016; Labuhn et al., 2010).

METHOD Mixed-method research, combining experimental and comparative analysis
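As a rough illustration of the two analyses named in the abstract, a MANOVA on coded feedback features by source and a Spearman correlation between essay and feedback quality, here is a sketch in Python. The data file, column names, and feature set are assumptions of this summary, not the study's coding scheme.

```python
# Sketch of the reported analyses, assuming a hypothetical data layout with
# one row per feedback instance.
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("feedback_scores.csv")  # hypothetical file and columns

# MANOVA: do the coded feedback features differ between peer and ChatGPT
# feedback? 'source' is the grouping variable; the three features are invented.
manova = MANOVA.from_formula(
    "descriptive + identification + affective ~ source", data=df)
print(manova.mv_test())

# Spearman correlation: is essay quality related to feedback quality?
rho, p = spearmanr(df["essay_quality"], df["feedback_quality"])
print(f"rho = {rho:.2f}, p = {p:.3f}")
```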

RESULT 1. The results showed a significant difference in feedback quality between peer feedback and ChatGPT-generated feedback. Peers provided feedback of higher quality compared to ChatGPT. This difference was mainly due to the descriptive and problem-identification features of feedback. ChatGPT tended to produce more extensive descriptive feedback, including summary statements such as a description of the essay or the action taken, while students performed better at pinpointing and identifying issues in the feedback they provided.
2. Overall, the results indicated that there was no significant relationship between the quality of essay writing and the feedback generated by peers and ChatGPT. However, a positive correlation was observed between essay quality and the affective feature of feedback generated by ChatGPT, while a negative relationship was observed between essay quality and the affective feature of feedback generated by peers. This finding means that as the quality of the essay improves, ChatGPT tends to provide more affective feedback, while peers tend to provide less affective feedback.

CONCLUSION This study contributes and adds value to the young but rapidly growing literature in two distinct ways. From a research perspective, this study addresses a significant void in the current literature by responding to the lack of research on AI-generated feedback for complex tasks like essay writing in higher education. The research bridges this gap by analyzing the effectiveness of ChatGPT-generated feedback compared to peer-generated feedback, thereby establishing a foundation for further exploration in this field. From a practical perspective of higher education, the study's findings offer insights into the potential integration of ChatGPT as a feedback source within higher education contexts. The discovery that ChatGPT's feedback quality could potentially complement peer feedback highlights its applicability for enhancing feedback practices in higher education. This holds particular promise for courses with substantial enrolments and essay-writing components, providing teachers with a feasible alternative for delivering constructive feedback to a larger number of students.

ARTICLE 6

TITLE Impact of ChatGPT on ESL students' academic writing skills: a mixed methods intervention study

13 February 2024

AUTHORS Santosh Mahapatra

ABSTRACT This paper presents a study on the impact of ChatGPT as a formative feedback tool on the writing skills of undergraduate ESL students. Since artificial intelligence-driven automated writing evaluation tools positively impact students' writing, ChatGPT, a generative artificial intelligence-propelled tool, can be expected to have a more substantial positive impact. However, very little empirical evidence regarding the impact of ChatGPT on writing is available. The current mixed methods intervention study tried to address this gap. Data were collected from tertiary-level ESL students through three tests and as many focus group discussions. The findings indicate a significant positive impact of ChatGPT on students' academic writing skills, and students' perceptions of the impact were also overwhelmingly positive. The study strengthens and advances theories of feedback as a dialogic tool and of ChatGPT as a reliable writing tool, and has practical implications. With proper student training, ChatGPT can be a good feedback tool in large-size writing classes. Future researchers can investigate the impact of ChatGPT on various specific genres and micro aspects of writing.

STATEMENT 1. In an intensive academic writing course, when the instructional hours and tasks are held constant, does the employment of ChatGPT as a feedback tool have any significant impact on undergraduate ESL students' writing skills?
2. How do the experimental group students perceive the impact of ChatGPT as a feedback tool on their writing skills?

THEORY & CONCEPT 1. The utility and positive impact of formative feedback in the writing classroom are well established (Olsen & Huns, 2023). When operationalized in the form of self- and peer assessment (SA and PA), formative feedback leads to reflection, self-regulation, self-monitoring, and revision on the part of students (Lam, 2018). SA and PA can be used to augment learning in the large-size writing classrooms often encountered in developing and under-developed countries in the Global South (Fathi & Khodabakhsh, 2019; Mathur & Mahapatra, 2022). However, research on feedback in large writing classes is limited (Rodrigues et al., 2022). It has been reported that smarter techniques must replace traditional ways to offer personalized dialogic feedback to students (Kohnke et al., 2023). With the proliferation of AI-driven tools such as Grammarly, QuillBot, Copy.ai, Wordtune, ChatGPT, and others, it has become easier for students to obtain feedback on their writing (Marzuki et al., 2023; Zhao, 2022). These tools have advanced automated writing evaluation (AWE) and feedback in writing (Gayed et al., 2022).
2. Like earlier AI chatbots, ChatGPT can be used for generating ideas and brainstorming (Lingard, 2023). Recently, it has been accepted that ChatGPT can make writing easier and faster (Stokel-Walker, 2022). This potential, when exploited by teachers, can be converted into a dependable feedback tool. Wang and Guo (2023) discuss ChatGPT supporting students with learning grammar and vocabulary. As pointed out by Rudolph et al. (2023), irrespective of students' ability to use language accurately to ask questions, ChatGPT can provide feedback and information. In another study, by Dai et al. (2023), students received corrective feedback from ChatGPT. Mizumoto and Eguchi (2023) also highlight similar findings from trying ChatGPT as an AWE tool. In a study conducted in Saudi Arabia, Ali et al. (2023) discuss the positive impact of its use on learners' motivation. This could be due to its ability to provide reliable explanations (Kohnke et al., 2023) without the student having to go through the anxiety of asking the query in a classroom (Su et al., 2023). Since coming into existence in the last part of 2022 (OpenAI, 2022), ChatGPT has gained immense popularity among language educators. It has been reported as capable of producing high-quality texts (Gao et al., 2022), offering feedback on text organization and language use and recommending corrections (Ohio University, 2023), and logically organizing content, adding appropriate supporting details and conclusions (Fitria, 2023). While Yan (2023) has reported benefits to students' writing skills through its use, he has also warned that its use can threaten academic honesty and ethicality in writing.

METHOD Mixed methods intervention design

RESULT 1. Positive impact of ChatGPT as a feedback tool
The impact of ChatGPT as a feedback tool on students' writing skills was positive and significant. The differences among the EG mean scores for the pre-test, post-test, and delayed post-test (see Table 2) indicate the trajectory of improvement in students' writing skills.
2. Students' positive perception of the impact
The students seemed happy about how ChatGPT helped them generate ideas and focused information on the given topic and work independently. They also highlighted how it promoted collaboration among peers, facilitated faster task completion, helped them create strong topic sentences, and reduced brainstorming time.

CONCLUSION The study establishes the potential of ChatGPT as a pedagogic tool for writing classrooms, especially in many Global South countries where students have access to portable computing devices and the Internet. It can be easily integrated into the regular teaching of academic writing skills in institutions of higher education. A major factor that needs mentioning is the teacher's attitude towards ChatGPT and their ability to use it in a constructive manner in a large-size classroom. The latter includes self- and student-training. It is true that many webinars and workshops on the use of ChatGPT are being conducted for teachers working in Global South countries like India. However, without proper reflective planning and an analysis of the need for shunning traditional feedback strategies, the use of ChatGPT may not be as impactful. Thus, teacher education programs need to orient teachers towards utilizing ChatGPT in their writing classrooms.

ARTICLE 7

TITLE Improving Writing Feedback for Struggling Writers: Generative AI to the Rescue?

19 April 2024

AUTHORS Anya S. Evmenova, Kelley Regan, Reagan Mergen and Roba Hrisseh

ABSTRACT Generative AI has the potential to support teachers with writing instruction and feedback. The purpose of this study was to explore and compare feedback and data-based instructional suggestions from teachers with those generated by different AI tools. Essays from students with and without disabilities who struggled with writing and needed a technology-based writing intervention were analyzed. The essays were imported into two versions of ChatGPT using four different prompts, whereby eight sets of responses were generated. Inductive thematic analysis was used to explore the data sets. Findings indicated: (a) differences in responses between ChatGPT versions and prompts, (b) AI feedback on student writing did not reflect the provided student characteristics (e.g., grade level or needs; disability; ELL status), and (c) ChatGPT's responses to the essays aligned with teachers' identified areas of need and instructional decisions to some degree. Suggestions for increasing educator engagement with AI to enhance the teaching of writing are discussed.

STATEMENT RQ1: What is the difference between responses generated by GPT-3.5 and GPT-4 given prompts which provide varying specificity about students' essays?
RQ2: What is the nature of the instructional suggestions provided by ChatGPT for students with and without disabilities and/or ELLs (aka struggling writers)?
RQ3: How does the formative feedback provided by GPT-3.5 and GPT-4 compare to the feedback provided by teachers when given the same rubric?

THEORY & CONCEPT 1. The advances in Generative Artificial Intelligence (generative AI) have transformed the field of education, introducing new ways to teach and learn. Its integration is fast growing in all areas of education, including special education (Marino et al., 2023). Generative AI has the potential to increase the inclusion of students with disabilities in general education by providing additional assistive supports (Garg and Sharma, 2020; Zdravkova, 2022). Specifically, large language models like the one used by a popular AI tool, ChatGPT (Chat Generative Pre-trained Transformer), can generate human-like responses to prompts, similar to a conversation. It can facilitate learning for students with and without high-incidence disabilities (e.g., learning disabilities, ADHD) who struggle with writing (Barbetta, 2023). While experts continue to investigate the future of writing in the ChatGPT era, it is evident that it will significantly alter writing instruction (Wilson, 2023). ChatGPT can support students in choosing a topic, brainstorming, outlining, drafting, soliciting feedback, revising, and proofreading (Trust et al., 2023). This tool may also be a helpful resource for teachers in providing feedback on students' writing. Timely and quality feedback from ChatGPT can encourage the use of higher-level thinking skills while improving the writing process, including the planning, writing, and reviewing phases of that process (Golinkoff & Wilson, 2023).

2. The writing process may be challenging for some students for many reasons. For example, planning is the first step of writing, but many students don't systematically brainstorm. Instead, they move directly into drafting their sentences, which may, in turn, be disjointed and not effectively communicated (Evmenova & Regan, 2019). Students, particularly those with high-incidence disabilities, may not produce text or may compose limited text, struggling with content generation, vocabulary, and the organization of ideas (Chung et al., 2020). While multilingualism is an asset, we have observed similar challenges with writing among English Language Learners in our research (Hutchison et al., 2024). The cognitive demands of drafting a response leave many students with no capacity to then edit or revise their work (Graham et al., 2017). Therefore, teachers should provide scaffolds to break down the complex process of writing so that it is sequential and manageable, progressing from simple to more complex concepts and skills.

METHOD Comparative Analysis
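To make the comparative design concrete: each essay went to two ChatGPT versions under prompts of varying specificity, yielding the response sets analyzed below. Here is a hedged sketch of such a collection loop with the OpenAI SDK; the prompt templates and parameter names are illustrative assumptions of this summary, not the study's actual prompts.

```python
# Sketch of collecting feedback sets across model versions and prompt
# specificity, assuming the openai Python SDK (v1+). Prompt wording invented.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "no_rubric": "Give formative feedback on this student essay:\n\n{essay}",
    "specific": (
        "The writer is a {grade}-grade student with {needs}. Using this "
        "rubric:\n{rubric}\nGive formative feedback on:\n\n{essay}"
    ),
}

def collect_feedback(essay: str, grade: str, needs: str, rubric: str) -> dict:
    """Return one response per (model, prompt) combination."""
    sets = {}
    for model in ("gpt-3.5-turbo", "gpt-4"):
        for name, template in PROMPTS.items():
            reply = client.chat.completions.create(
                model=model,
                messages=[{
                    "role": "user",
                    "content": template.format(
                        essay=essay, grade=grade, needs=needs, rubric=rubric),
                }],
            )
            sets[(model, name)] = reply.choices[0].message.content
    return sets
```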

RESULT 1. In an effort to answer RQ1 and explore the differences between responses generated by different ChatGPT versions when given prompts with varying specificity, we analyzed eight sets of responses. While the purpose was not to compare the sets to find which one is better, several patterns were observed that can guide teachers in using ChatGPT as the starting point for generating writing feedback for their struggling writers.
2. Instructional suggestions based on the evaluation of student writing were the focus of RQ2. Although we expected the responses from prompts that included specific student characteristics to differentiate the instructional suggestions in some way, this was not the case. In fact, none of the sets provided explicit instructional suggestions aligned with students' characteristics (e.g., grade, disability, ELL).
First, the suggestions for improving the writing of a 3rd grader's essay were not distinct from those provided in response to a 7th grader's writing (in the Generic Rating GPT-3.5 and No Rubric GPT-3.5 sets). Also, there were no remarkable differences in the vocabulary used in the feedback for a 3rd grader vs. a 7th grader (in the Generic Rating GPT-4 set). Only one set (Generic Rating GPT-4) offered a personalized message in a student-friendly format (without any additional prompting to do so).
Second, student characteristics were merely acknowledged in some sets. For example, Specific Analytic Rubric GPT-3.5 and GPT-4 only noted those characteristics in the summary section at the end of the feedback (e.g., "This is a well-written persuasive essay by your 7th-grade students with ADHD"). This was also observed in other sets.

CONCLUSION This study offers examples of how to potentially incorporate AI effectively and efficiently into writing instruction. High-quality special education teachers are reflective about their practice, use a variety of assistive and instructional technologies to promote student learning, and regularly monitor student progress with individualized assessment strategies.
It seems very likely that teachers will adopt the capabilities of generative AI tools. With ongoing development and enhancements, AI technology is certain to become an integral component of classroom instruction. However, given the limitations of ChatGPT identified in this study, teacher-led instruction and decision making are still needed to personalize and individualize specialized instruction. Engaging with the technology more and building familiarity with what it can do to improve student learning and teacher practice is warranted.

ARTICLE 8

TITLE Learner interaction with, and response to, AI-programmed automated writing evaluation feedback in EFL writing: An exploratory study

27 June 2023

AUTHORS Hongzhi Yang, Chuan Gao and Hui-zhong Shen

ABSTRACT Recently, artificial intelligence (AI)-programmed automated writing evaluation (AWE) has attracted increasing attention in language research. Using a small data set arising from an analysis of five Chinese university-level English as a foreign language (EFL) students' submissions, this paper examined in detail how EFL students interacted with the feedback of Pigai, the largest AI-programmed AWE system in China. The analysis started with the intention of capturing the machine feedback on the five students' submissions and the exchanges between the participants and Pigai over repeated submissions, ranging from 3 to 12 submissions. The analysis showed that the learners' interactions with Pigai focused on error-corrective feedback in the initial two submissions. In the case of one student who had 12 submissions, the non-error corrective feedback increased gradually over time, providing rich linguistic resources but without examples and contextual information. The students' take-up rates of feedback with linguistic resources were much lower than those of error-corrective and general feedback. A terrain model mapping the stages and nature of student responses showed a more complete dynamic process, in which students' responses changed from initial mechanical responses at the discrete language level to more considered approaches in response to machine feedback. The findings of this study have implications for both language pedagogy and the future design and development of AWE for second or foreign language learning.

STATEMENT 1. What are the salient features of iterative feedback by Pigai over a period of multiple resubmissions?
2. How do learners interact with Pigai feedback through the process of multiple revisions?

THEORY & CONCEPT 1. In the EFL context, some research revealed that teachers' WCF mainly focused on written accuracy, with more direct error feedback (Lee, 2011; Waer, 2021). At the same time, teacher WCF has been gradually replaced by the increasing use of automated writing evaluation programs, as it was regarded as unrealistic to implement teacher WCF in large EFL classrooms in China (Yu et al., 2020; Zhang & Hyland, 2018). Therefore, to explore "feedback up (what the student can do better in the same task?)" (Chong, 2018, p. 342), our research explored the process and pattern of Chinese EFL students' engagement with AWE via multiple submissions to complete one writing task.
2. Automated writing evaluation (AWE) is a machine learning system that provides learners feedback on spelling, punctuation, grammar, sentences, and coherence (Zhang & Hyland, 2018). However, some aspects of writing, such as writing style, creativeness, and conceptual ideas, cannot be evaluated by AWE (Stevenson & Phakiti, 2014, 2019), and only limited genre types can be marked by AWE, apart from the narrative and argumentative text types (Stevenson & Phakiti, 2014). Research on AWE has largely focused on writing products, with little attention paid to the revision process (Stevenson & Phakiti, 2014; Storch, 2018).

METHOD Case Study

RESULT 1. Regarding the first research question about the salient features of Pigai feedback, this study provided detailed information regarding the patterns of Pigai feedback for the five participants' submissions. As shown in Table 2, 73% of the feedback items were non-error feedback, and the error-corrective feedback focused on capitalization, vocabulary, grammar, and punctuation errors. These covered most of the error categories listed by Han and Hyland (2019). The analysis of Pigai feedback types and error categories showed that although Pigai feedback varied across types, the majority focused on language-related errors, such as mechanics, grammar, and lexical errors. These direct error-corrective feedback items decreased after submission 2 for all five students and remained low until submission 8 for student 5. The general feedback focused on essay organization but lacked a more comprehensive view of writing in areas such as ideas and content (Huang & Renandya, 2020).
2. Regarding the second research question, all five participants demonstrated sustained engagement with Pigai feedback via multiple revisions and submissions. Student 5 was particularly highly engaged in responding to Pigai feedback, evidenced by the process and patterns of the revision. This differed from some research that reported a lack of revision or re-drafting after receiving feedback (El Ebiary & Windeatt, 2010; Koltovskaia, 2020). In addition, all students made different revisions based on different feedback sources. All five students corrected the errors identified by the error-corrective feedback quickly in the first two submissions. This was evidenced by the high take-up rate of error-corrective feedback and the decrease of this type of feedback after the second submission. This is in line with the finding that error-corrective feedback leads to improvement in students' writing, especially in terms of error-rate reduction (Liao, 2016; Wang et al., 2013). This may have been because it was easy for the students to make superficial mechanical revisions, but more troublesome to make revisions of collocations and synonymous words (Bai & Hu, 2017).
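The take-up comparison described above reduces to a simple proportion per feedback type: of the items Pigai raised, how many did the student act on in the next draft? A small sketch with an invented data layout:

```python
# Sketch of computing take-up rates per feedback type, assuming a
# hypothetical log with one row per feedback item.
import pandas as pd

log = pd.read_csv("pigai_feedback_log.csv")  # hypothetical columns: student, type, taken_up (0/1)

# Mean of the 0/1 indicator = share of items acted on, i.e., the take-up rate.
take_up = log.groupby("type")["taken_up"].mean().sort_values(ascending=False)
print(take_up)  # expectation from the study: error-corrective > general > linguistic-resource
```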

CONCLUSION Our detailed analysis showed that Pigai provided various types of feedback. The majority of error-corrective feedback focused on local, language-related errors. At the same time, Pigai provided a significant amount of non-error feedback items, increasing through multiple submissions but lacking examples and contextual information. The detailed analysis of all five students' revisions and resubmissions showed certain patterns in their responses to different types of feedback: initially to error-corrective feedback, then to non-error corrective feedback through trial and error, and then to general feedback. It can be concluded that sustained engagement with Pigai feedback could facilitate writing improvement and develop students' autonomy in using various writing strategies. These findings contribute to the literature on students' engagement with AWE, which until now has lacked evidence on how students take up AWE feedback (Bai & Hu, 2017; Lu, 2019).

ARTICLE 9

TITLE The effects of automatic writing evaluation and teacher-focused feedback on CALF measures and overall quality of L2 writing across different genres

AUTHORS Zahra Fakher Ajabshir and Saman Ebadi

ABSTRACT This study investigates the effects of teacher-focused feedback (TF) and automatic writing evaluation (AWE) on global writing performance as well as the syntactic complexity, accuracy, lexical diversity, and fluency (CALF) of English as a foreign language (EFL) learners' narrative and argumentative writings. The participants were randomly assigned to TF and AWE groups. During the treatment, the teacher delivered instruction on the narrative and argumentative genres, followed by the participants' engagement in writing texts and getting feedback either from the teacher or from AWE. The results revealed improvements in overall writing performance (formal aspects) as well as CALF measures. While there was no significant difference between the two groups in their overall writing performance, AWE yielded better scores in lexical diversity and syntactic complexity, and the TF group outperformed in fluency. Moreover, an interaction was found between feedback types (TF vs. AWE) and text genres in CALF measures. The narrative writings were characterized by higher lexical diversity, syntactic accuracy, and fluency, and the argumentative genre yielded higher scores in syntactic complexity. The results suggest that both human and machine assessments were beneficial in improving written products in EFL contexts. Also, engaging students in writing various genres is likely to result in improvement in different CALF aspects.
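As a rough illustration of what the four CALF dimensions quantify, the sketch below computes common low-tech proxies: mean sentence length for complexity, error-free proportion for accuracy, type-token ratio for lexical diversity, and total words for fluency. Real studies use validated analyzers; these one-liners only show the idea and are not the instruments used here.

```python
# Simple CALF proxies; the measures chosen are illustrative only.
import re

def calf_proxies(text: str, n_errors: int) -> dict:
    """Return rough proxies for Complexity, Accuracy, Lexical diversity, Fluency."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "complexity": len(words) / max(len(sentences), 1),          # mean sentence length
        "accuracy": 1 - n_errors / max(len(words), 1),              # error-free proportion
        "lexical_diversity": len(set(words)) / max(len(words), 1),  # type-token ratio
        "fluency": len(words),                                      # total words produced
    }
```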

STATEMENT RQ1 Is there any significant difference between automated writing evaluation (AWE) and teacher-focused feedback (TF) in the global writing performance of EFL learners?
RQ2 Is there any significant difference between AWE and TF in syntactic complexity, accuracy, lexical diversity, and fluency (CALF) measures in EFL learners' narrative writings?
RQ3 Is there any significant difference between AWE and TF in CALF measures in EFL learners' argumentative writings?

THEORY & CONCEPT 1. The efficacy of teachers' corrective feedback in improving students' writing performance has been established by ample evidence. There is a wealth of studies suggesting that, as a daily practice, teacher feedback enables improved performance not only in overall quality of writing (De Smedt et al., 2016; Cheng et al., 2021; Lv et al., 2021; Zhang & Zhang, 2022) but also in different aspects and dimensions, including complexity (Barrot & Gabinete, 2019; Lu & Ai, 2015), accuracy (Barrot & Gabinete, 2019), and fluency (Fathi & Rahimi, 2022).
2. As an alternative to hand-scored writing assessment, AWE has drawn the attention of EFL teachers and scholars in recent years. The advantages of using AWE include its consistency, convenient rating, instant feedback, and opportunities to produce multiple drafts and revisions (Stevenson & Phakiti, 2014). AWE systems assist teachers in providing more higher-level feedback and expediting the feedback process, reducing the teacher feedback burden and enabling them to be more selective in the type of feedback they deliver (Wilson & Czik, 2016).

METHOD Experimental Research

RESULT This study aimed to investigate how the use of TF and AWE modes affects students' global writing performance and CALF measures in an EFL environment. Overall, both types of feedback were found to positively affect the students' global writing performance and CALF measures in L2 writing. After both feedback modes were employed, the students' writing demonstrated significant improvement in overall writing performance as well as CALF measures compared with their compositions prior to receiving the feedback.

Regarding accuracy, the finding that the narrative genre yielded more accurate drafts is consistent with the findings of Way et al. (2000) but contradicts those of Yoon and Polio (2017), who found no genre effect on accuracy.

With respect to fluency, the total number of words in the narratives exceeded that of the argumentative texts. This is not surprising, as the narrative genre is generally less demanding and involves simple, frequent lexis, whereas the argumentative genre is characterized by less frequent, more complex lexis (Yoon & Polio, 2017).
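
To make the fluency and lexical-diversity measures discussed above concrete, here is a minimal Python sketch of two simple proxies: total word count for fluency and type-token ratio for lexical diversity. These are illustrative simplifications, not the instruments used in the study, and the sample sentence is invented.

# Toy proxies for two CALF dimensions; the study used more
# sophisticated measures. Sample text is invented for illustration.

def fluency(text: str) -> int:
    """Fluency proxy: total number of words produced."""
    return len(text.split())

def lexical_diversity(text: str) -> float:
    """Lexical diversity proxy: type-token ratio (unique words / total words)."""
    tokens = [w.lower().strip(".,!?;:") for w in text.split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

narrative = "Yesterday I went to the park and I saw a dog and the dog ran away."
print(fluency(narrative))                      # 16 words
print(round(lexical_diversity(narrative), 2))  # 0.75; repeated words lower the ratio

On these proxies, a longer narrative with repetitive vocabulary would score high on fluency but low on lexical diversity, mirroring the genre contrast reported above.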

CONCLUSION The findings of this study provide support for the contribution of
teacher assessment and automated evaluation platforms in the
development of L2 writing. The effectiveness of each type of
feedback can be determined with reference to the type of writing
task, the course’s purpose, and the students’ proficiency level. Like
any other technological tool, AWE is fallible, and decisions on the
selection and use of certain AWE tools should
be made with caution, continuously evaluating these tools’
performance across various EFL contexts. Various studies have encouraged the use of AWE as a supplement to teacher feedback
(Jiang et al., 2020; Link et al., 2022). AWE can be used in
numerous ways, including employing it as a text editor, a scaffold
for teachers, and an interface promoting collaborative written tasks
(Stevenson, 2016).

ARTICLE 10

TITLES Timed second language writing performance: effects of perceived teacher vs perceived automated feedback

AUTHORS Chian-Wen Kao & Barry Lee Reynolds


ABSTRACT Automated writing evaluation (AWE) software has received much attention from researchers and practitioners, as it has the potential to reduce the time necessary for providing second language (L2) students with written corrective feedback on their writing. While more practitioners are taking advantage of AWE affordances, research has indicated that students engaged in untimed writing may still have reservations about its usefulness as a feedback provider compared to that of L2 writing teachers. As trust affects student writer behavior, it is important to understand how the relationship between trust and writing outcomes is shaped by the source of writing feedback. This is especially important in contexts in which high-stakes timed writing is the most frequent type of writing that L2 writers engage in. Using a quasi-experimental design, we addressed these issues by examining the effects that perceived feedback source had on two groups of high-intermediate second language writers (n = 61 perceived teacher feedback group; n = 60 perceived AWE feedback group) enrolled in academic reading and writing courses in Northern Taiwan. Both groups received the same type of instruction and feedback for 18 weeks; the only difference was whether the feedback was presented as coming from the teacher or from the AWE software. Results showed that the perceived AWE feedback group significantly outperformed the perceived teacher feedback group in their L2 writing by the end of the course. In addition, the perceived AWE feedback group put more trust in feedback on grammar rules and lexical choices than the perceived teacher feedback group. This indicates that high-intermediate L2 writers may trust AWE software more for specific types of feedback, such as those related to grammar and vocabulary. These and other results suggest that decisions on providing AWE or teacher feedback should be made based on error type.

STATEMENT 1. How large an effect do perceptions of feedback sources have on students' timed writing performance?
2. To what extent does students' trust in feedback vary between the group informed that the feedback comes from AWE (i.e., the perceived AWE feedback group) and the group informed that the feedback comes from a teacher (i.e., the perceived teacher feedback group)?
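
RQ1 above asks how large an effect the perceived feedback source has. As an illustration only, the following minimal Python sketch computes Cohen's d, one common standardized effect-size index for comparing two groups; the summary does not state which index the authors used, and the score arrays below are invented.

import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

# Hypothetical timed-writing scores (invented, not the study's data).
perceived_awe     = np.array([78, 82, 75, 88, 80, 84])
perceived_teacher = np.array([72, 76, 70, 81, 74, 77])
print(round(cohens_d(perceived_awe, perceived_teacher), 2))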
THEORY & CONCEPT
1. Automated writing evaluation (AWE) and L2 writing. With the advancement of technology has come the invention of software collectively referred to as AWE. This software can assess writing in a multitude of ways, producing a score and detailed feedback relevant to the processed text (Hockly, 2018), with the potential of serving the formative purpose of form mediation (Lee, 2017). AWE is considered more powerful than traditional spelling and grammar checkers because it utilizes natural language processing techniques and machine learning technologies to produce feedback in addition to correction (Shermis et al., 2013). Although originally developed to support native writers, AWE is now used by an ever-increasing number of L2 student writers (Li et al., 2017). A toy sketch of this feedback-plus-correction principle appears after this list.
2. Untimed writing allows for planning, organizing, correcting, and editing that can lead to a greater number of words and better writing quality (Wu & Erlam, 2016). In contrast, under a timed condition, students have less time for these planning and organizing stages when producing L2 writing. While some research has indicated that students' use of AWE tools and other technology while completing timed writing can improve L2 writing accuracy (Mujtaba et al., 2022), more research is needed to consider whether the feedback provided by such technology can lead to substantial linguistic improvements when the feedback is delivered after writing has been completed, a common test condition in many L2 writing classrooms (Ranalli, 2021).
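
The following toy Python sketch illustrates, in highly simplified form, the principle described in point 1: pattern-based detection that returns an explanation alongside a suggested correction, rather than a correction alone. Real AWE systems rely on NLP and machine-learning models; the two rules, messages, and sample sentence here are invented for illustration.

import re

# Each toy rule pairs a regex with a suggested correction and an
# explanatory message, mimicking feedback-plus-correction output.
RULES = [
    (re.compile(r"\bhave went\b"), "have gone",
     "Use the past participle 'gone' after 'have'."),
    (re.compile(r"\bmore better\b"), "better",
     "'Better' is already comparative; 'more' is redundant."),
]

def check(text: str):
    """Return (matched text, suggested correction, explanation) for each rule hit."""
    hits = []
    for pattern, correction, explanation in RULES:
        for m in pattern.finditer(text):
            hits.append((m.group(), correction, explanation))
    return hits

for hit in check("I have went to a more better school."):
    print(hit)
# ('have went', 'have gone', "Use the past participle 'gone' after 'have'.")
# ('more better', 'better', "'Better' is already comparative; 'more' is redundant.")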

METHOD A quasi-experimental design.

RESULT The difference between the perceived teacher feedback and perceived automated feedback groups in terms of students' trust in feedback. Students' trust in the feedback given on their writing performance was analyzed. For the perceived teacher feedback group, the skewness was 0.008 and the kurtosis was −0.797; for the perceived automated feedback group, the skewness was 0.457 and the kurtosis was −0.466. These values indicated that the skewness and kurtosis of students' responses to the questionnaire were acceptable, showing the data to be approximately normally distributed. Table 6 provides the descriptive statistics for the questionnaire responses reported by group.
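
For readers unfamiliar with this normality check, here is a minimal Python sketch of how skewness and excess kurtosis of questionnaire responses can be computed; the ratings array is invented, not the study's data.

import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical 5-point Likert trust ratings (invented for illustration).
ratings = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 4, 2])

s = skew(ratings)      # sample skewness; 0 for a symmetric distribution
k = kurtosis(ratings)  # excess (Fisher) kurtosis; 0 for a normal distribution
print(f"skewness = {s:.3f}, kurtosis = {k:.3f}")

Values near zero, like those reported above, are commonly taken as evidence that the questionnaire data are approximately normally distributed.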
CONCLUSION This study indicated that student writers' perceived automated feedback on untimed essays can lead to writing improvement on timed essays, directly addressing the issue raised by Stevenson and Phakiti (2014) that few studies have shown AWE feedback effects transferring to better writing proficiency. A related issue concerns student writers' trust in AWE feedback. The present finding also confirms previous evidence that students generally trust AWE feedback to be effective for their writing performance, especially for language accuracy (Dikli & Bleyle, 2014; Shang, 2019). Future research should empirically investigate to what degree levels of learner engagement influence AWE feedback effects. Finally, this study shed light on several issues concerning AWE feedback perception, paving the way for new research that will help consolidate understanding of AWE feedback.
