
Assignment 1 Literature Review

The Australian Curriculum, Assessment and Reporting Authority (ACARA, 2016) is the peak Australian body that oversees improving the learning of all Australian students from K-12 through the preparation of school curriculum, assessment and reporting. To assess student outcomes, ACARA introduced NAPLAN, an annual test for years 3, 5, 7 and 9 designed to collate information about schools' literacy and numeracy levels. The results are provided to stakeholders such as policy makers, schools and parents so that they can make informed decisions. According to ACARA, the benefits of NAPLAN are increased accountability for stakeholders and improved student learning outcomes (ACARA, 2016). Much has been written about the advantages and disadvantages of NAPLAN (Mayes & Howell, 2018; Hardy, 2014; Rose, Low-Choy, Singh & Vasco, 2018), and there is much debate about how effective it is as a regime for assessing students and comparing them across Australian secondary schools. This paper reviews contemporary literature on how effective NAPLAN is as an assessment regime and the literature's overarching view of how well it assesses student achievement.
A view raised by Creagh (2016), Johnston (2017) and Thompson, Adie and Klenowski (2018) is that NAPLAN is a poor indicator of the literacy and numeracy levels of students in Australia. Each presents their argument predominantly from a narrow point of view, choosing to focus on the statistical validity of the data collated by ACARA, which is then used to report NAPLAN results on the MySchool website. ACARA does this by comparing results across like schools, and the literature on NAPLAN suggests that these results are at times misleading.
A quantitative research project carried out in Queensland by Creagh (2016), comparing 2010 NAPLAN data across a range of Queensland schools, found that the NAPLAN results showed that "students from non-English speaking backgrounds outperformed English speaking students on most test domains" (p. 252). The major point raised by Creagh (2016) is the definition of one category in the data collected from the NAPLAN results, labelled Language Background Other than English (LBOTE). This category included ESL students, refugee students and any student for whom either the student or a parent speaks a language other than English at home. What the category failed to capture is the country of birth of each student, their length of time in Australia, the status of their visa category, the language the student spoke at home, and the ethnic background of the student and the parents. Creagh (2016) repeated the quantitative analysis with those categories included, and the results contradicted those of NAPLAN. It was clear that the NAPLAN data presented to stakeholders were not accurate, because LBOTE was a poor indicator for capturing the performance of the students who undertook the NAPLAN test. This would be a straightforward fix for ACARA, yet there seems to be little emphasis on refining the definition of the LBOTE category. The reason, in my opinion, is both political and financial. Under the National Partnership Agreement, states receive lump sum funding if educational outcomes are shown to have improved. The LBOTE category in its current form provides the favourable outcomes that schools want to see in order to secure funding, and politicians might retain the category to hide truly disadvantaged students and tie favourable results to the success of their policy decisions. Regardless of the reasons, I do not agree with Creagh (2016) that NAPLAN is a poor indicator of student literacy and numeracy outcomes, because the results, regardless of who the student is, should be a snapshot of the achievement of the student and the school at a given point in time. Students who sit NAPLAN can be assessed again at the next NAPLAN assessment they sit, and the improvement is captured on the MySchool website.

Another article arguing that NAPLAN is not an accurate assessment program for literacy and numeracy in Australian schools is by Thompson et al. (2018). The argument raised in the article revolves around the validity of the data, which, according to the American Educational Research Association, American Psychological Association and National Council on Measurement in Education (as cited in Thompson et al., 2018), is in statistical terms the most "fundamental consideration in developing an evaluation of tests" (p. 762). The questionable validity of the data that NAPLAN offers is tied to participation rates. The argument is succinctly put as follows: participation can affect the validity of comparative claims even when participation rates are around 95% (Thompson et al., 2018, p. 766). In support of this argument, the authors cite Margaret Wu (p. 766), who also questions the reliability of NAPLAN scores because most Australian schools have low participation numbers. She argues that most Australian schools have fewer than 100 students per year level, and that a significant number have fewer than 30 participating students per year level. Again, this argument that NAPLAN is a poor assessment regime for statistical reasons, whilst valid, should not discredit NAPLAN as an effective assessment program. Even where the number of participants at a school is low, the information collated does not need to be statistically significant to allow comparisons of the strengths and weaknesses of students' results. ACARA's purpose is to provide information to drive improvement, and school results are valuable for aligning pedagogical adjustments that assist in the improvement of literacy and numeracy.

The same theme, questioning the validity of the data, appears in another article by Johnston (2017). Johnston (2017) conducted a research project in seven independent schools in NSW, analysing NAPLAN data to see whether schools could use it to enhance their literacy results. The project found that using NAPLAN data at the whole-school and year-cohort levels was difficult because the data were rendered statistically invalid (p. 21). It seems that, again, statistical data is at the forefront of attempts to assess the credibility of NAPLAN and render it flawed. Johnston (2017) goes on to say that in its current form NAPLAN ignores student differences in ability, skill, disability and social class (p. 25). Whilst this might sound convincing, what is missed in all of this is that NAPLAN does not seek to test for differentiation or inclusion; rather, it is a snapshot of where schools are, in comparison with other like schools, in literacy and numeracy skills.
To conclude this review of contemporary literature on how effective NAPLAN is in assessing students in Australian secondary schools: there is an overemphasis on meeting conditions relating to statistical norms and less emphasis on the overall snapshot of literacy and numeracy that NAPLAN provides. The overwhelming criticism seems valid from a statistical point of view, but NAPLAN test results need to be looked at as comparing schools, not individual student performance. Hence, I conclude that NAPLAN is a good and effective assessment program in Australian secondary schools when comparing the outcomes of one school against other like schools. This is supported by the Senate Standing Committee on Education, Employment and Workplace Relations (as cited in Johnston, 2017), which found that there were some unintended consequences of NAPLAN in its current form but did not cite statistical reasons.
References

Australian Curriculum, Assessment and Reporting Authority. (2016). Assessment: 2016. Retrieved from https://www.acara.edu.au/assessment
Creagh, S. (2016). 'Language Background Other Than English': A problem NAPLaN test category for Australian students of refugee background. Race Ethnicity and Education, 19(2), 252-273. doi:10.1080/13613324.2013.843521

Hardy, I. (2014). A logic of appropriation: Enacting national testing (NAPLAN) in Australia. Journal of Education Policy, 29(1), 1-18. doi:10.1080/02680939.2013.782425

Johnston, J. (2017). Australian NAPLAN testing: In what ways is this a "wicked" problem? Improving Schools, 20(1), 18-34. doi:10.1177/1365480216673170

Mayes, E., & Howell, A. (2018). The (hidden) injuries of NAPLAN: Two standardised test events and the making of 'at risk' student subjects. International Journal of Inclusive Education, 22(10), 1108-1123. doi:10.1080/13603116.2017.1415383

Rose, J., Low-Choy, S., Singh, P., & Vasco, D. (2018). NAPLAN discourses: A systematic review after the first decade. Discourse: Studies in the Cultural Politics of Education. Advance online publication. doi:10.1080/01596306.2018.1557111

Thompson, G., Adie, L., & Klenowski, V. (2018). Validity and participation: Implications for school comparison of Australia's National Assessment Program. Journal of Education Policy, 33(6), 759-777. doi:10.1080/02680939.2017.1373407
Action Research Protocol

In order to find out whether NAPLAN is an effective assessment program in Australian secondary schools, I will conduct action research by interviewing interested stakeholders at an independent Western Sydney secondary school. The interview questions will revolve around the overarching theme drawn from the literature review. I will attempt to find out from the interviews whether NAPLAN is an effective assessment strategy and whether, in the participants' own opinion, the validity of the data is a cause for concern. Whilst I will not raise data validity directly, there will be a question inviting feedback about the effectiveness of NAPLAN data.

The stakeholders to be interviewed include a principal, the head of the senior school, three heads of department and five other teachers from the English and Mathematics departments. The overwhelming theme from the literature review was that NAPLAN is not an effective assessment program, and most of the articles reviewed cited the validity of the data collected. I expect some replies suggesting that the data captured by ACARA for the MySchool website is unreliable, but in order to avoid steering the participants towards replies about data validity, there will be no direct mention of it in the questions.

Participants will be asked to sign a consent form before engaging in the interview, accompanied by an information sheet on the background of the research and the purpose of the interview. Consent is optional, and there is no obligation to answer every question during the interview, so participants may decline to answer some questions. This addresses the ethical dimension of the consent form and information sheet: for any question that might put participants in harm's way, for example by risking their job or causing unnecessary stress when commenting on NAPLAN and their school, they will have the option not to answer. The information sheet accompanying the consent form will also note that all participants and the school will be de-identified when reporting on the results of the interviews, in the interest of keeping them anonymous and out of respect for any privacy concerns that might be raised.

The interviews will take place at the school, and responses will be written down, with no recordings. They will only proceed after participants have signed the consent form and read the information sheet. Each interview will consist of six questions: an opening question on whether they use NAPLAN data through the MySchool website, some probing questions about the effectiveness of NAPLAN, a direct question about the intended misuse of NAPLAN, some follow-up questions, and a final question seeking their opinion of NAPLAN as an assessment program compared to other testing programs. The aim is to draw out knowledge of NAPLAN and its effectiveness as an assessment program directly from stakeholders at the Western Sydney secondary school. The results will be tabulated into three groups: NAPLAN as an effective assessment program, NAPLAN as an ineffective assessment program, and miscellaneous feedback classified as positive or negative in relation to NAPLAN as an assessment program.
The questions are listed below:

Question 1 (Introducing question)

Have you used the MySchool website to monitor the improvement of students in literacy and numeracy?

Question 2 (Probing question)

Can you tell me how effective NAPLAN is as an assessment program for literacy and numeracy in Australian secondary schools?

Question 3 (Direct question)

Have you ever given or followed instructions to teach students specifically for the NAPLAN test?

Question 4 (Follow-up question to question 3)

Are there any unintended consequences that you know of arising from the NAPLAN results, such as political or financial ones?

Question 5 (Probing question to question 4)

Are you able to provide more details about the unintended consequences?

Question 6 (Opinion question)

How does NAPLAN compare to previous testing regimes?
