
Assessment in Education

Postgraduate Diploma in Education and BEd Honours in Education

By: Sakaria-Lyagwana Iyambo

IUM 2017

Ongwediva Campus

Define the term assessment

• Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. (Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning by Huba and Freed, 2000)
• Assessment is the systematic basis for making inferences about the learning and development of students. It is the process of defining, selecting, designing, collecting, analyzing, interpreting, and using information to increase students' learning and development. (Assessing Student Learning and Development: A Guide to the Principles, Goals, and Methods of Determining College Outcomes by Erwin, 1991)
• Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. (Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education by Palomba and Banta, 1999)

Areas of assessment
1.1.1 Cognitive domain
This domain refers to intellectual capability and skills. The six categories of the cognitive domain are knowledge, comprehension, application, analysis, synthesis, and evaluation; they describe the increasingly demanding thinking skills expected of learners as the knowledge and content become more difficult. This is the primary learning domain, because thinking skills are required in both of the following domains.

1.1.2 Affective domain


The affective learning domain addresses a learner's emotions and feelings towards learning experiences. It is concerned with behavioural aspects that may be labelled as beliefs. A learner's attitudes, interest, attention, awareness, and values are demonstrated by affective behaviours. The affective domain is critical for learning but is often not specifically addressed. This domain deals with attitudes, motivation and willingness to participate, whilst valuing what is being learned and ultimately incorporating the values of a discipline into a way of life. The stages in this domain are not as sequential as those of the cognitive domain, but can be described as receiving, responding, valuing, organizing and internalising values.

1.1.3 Psychomotor domain


The psychomotor domain focuses on performing a sequence of motor activities to a specified level of accuracy, smoothness, rapidity, or force. This domain deals with manual and physical skills: learners are expected to acquire a skill, to do something, and often to produce a product. Underlying the motor activity is cognitive understanding; the learner must have knowledge and understanding of the content before he/she can demonstrate a skill in the psychomotor domain.

Different methods and types of assessment

1.5.1 Informal assessment
An informal assessment is a method of measuring an individual's performance by casually observing his/her behaviour or using other informal techniques; often the learner is unaware that he/she is being assessed. Informal assessments are concerned less with facts, figures or scores than with content and performance: they seek to find out what learners know or how well they can perform a certain task, such as reading. Informal assessments are used to inform instruction.
Examples of informal assessment tools: Observation, Classwork, Homework

1.5.2 Formal assessment


Formal assessments usually take the form of a standardised test or the grading of a presentation or a section of reading. An interpretation is made about the learner's performance against a set or expected standard, and standard scores or averages can be calculated. The assessment used needs to match the purpose of assessing: formal tests should be used to assess overall achievement, to compare a learner's performance with that of other learners of the same age or grade, or to identify a learner's strengths and weaknesses relative to peers.

Examples of formal assessment tools: Standardised test, Examination and Diagnostic test

Types of assessment
1.7.1 Formative assessment
Formative continuous assessment is any assessment made during the school year in order to improve learning and to help shape and direct the teaching-learning process. Teachers make frequent, interactive assessments of learner understanding. Formative assessments are an integral part of the learning process and include informal assessments. They are used primarily to determine what learners have learned in order to plan further instruction, enabling teachers to adjust their teaching to meet learner needs and to guide learners towards reaching expected standards. Findings of formative assessments can be used to understand what changes need to be made to the teaching and learning process.

A variety of manageable, appropriate and fair assessment approaches should be used.


Some guidelines to be kept in mind are:
a. Assessment approaches need to be varied and effective, and should include successful approaches that have been used before.
b. Formative assessment approaches are suitable for obtaining valid, reliable and sufficient evidence of learner progress.
c. Information is informative and useful to the teacher, parent and learner with respect to progress and needs.
d. Formative assessments should inform teaching practice by identifying trends and weaknesses to be addressed with whole groups and individual learners.
e. Identification of trends and weaknesses is consistently accurate and promotes continuous improvement in assessment.
f. Formative assessment is used to motivate learners to extend their knowledge and skills, establish sound values, and promote healthy habits of study.
g. Assessment tasks help learners to solve problems intelligently by using what they have learned.

Benefits of formative assessment

a. Flexible
Formative assessments do not have a designated time at which to be implemented. This
flexibility allows teachers to tailor their lessons and assessments to the needs of their learners.
b. Easy to Implement
Because of their flexibility, formative assessments are easy to implement. They can be as large or small, as in-depth or general, as needed.
c. Checks for Understanding
Formative assessment can take many shapes. However, in any form, it is an assessment of
understanding. Implementing many formative assessments as the class moves through
material allows a teacher to catch and address any misconceptions the class or individual learners may have.
d. Informs Curriculum
Teachers can use the results of formative assessments to inform the curriculum and the
delivery of content. A teacher may choose to spend more time on a specific area in which
many learners struggle, or spend less time on an area with which most students are
comfortable.
e. Assesses the Teacher
Formative assessments provide opportunities for teachers to evaluate their own performance.
The results of the assessments can reveal weaknesses or strengths in the delivery of
instruction.

1.7.2 Summative assessment


Summative assessment is an assessment made at the end of the school year based on the
accumulation of the progress and achievements of the learner throughout the year in a given
subject, together with any end-of-year tests or examinations. The result of summative
assessment is a single end-of-year promotion grade which documents learners’ achievements
usually over the period of a year. Summative assessments confirm competence against the
standard, and are formal assessments. Summative assessments do not imply tests or
examinations alone, but rather the most appropriate way of gathering the required evidence of
competence. Summative assessments are an integral part of the learning process, and are
informed by an understanding of the various purposes of summative assessment as they affect
learners within and beyond the school system.

Some guidelines to keep in mind are:


a. Summative assessments are planned, recorded and reported in ways that promote the credibility of the assessment system.
b. Summative assessments make use of a variety of manageable, appropriate and fair assessment approaches that are suitable for summative decisions.
c. Summative assessment methodologies are appropriate to the syllabus objectives being assessed. They are capable of producing valid evidence in relation to the assessment objectives. This includes the use of practical assessments to assess practical skills where required.
d. Summative assessments draw upon evidence from formative assessments where appropriate and where practical, thus promoting the value of continuous assessment.
e. A range of question techniques is employed to enhance the assessment of understanding. There is greater use of open rather than closed questions.
f. Learners are involved and guided in the on-going assessment of their own learning.
g. Learner involvement is meaningful and contributes to the effectiveness of assessment.
h. Summative assessment decisions are consistent with decisions made about similar evidence from other learners. Decisions are justified by valid, authentic and sufficient evidence presented by and about learners.
i. Summative assessment results are interpreted fairly and accurately and in line with national assessment and promotion policies. Interpretations help to assess and promote learning and to modify instruction in order to encourage the continuous development of learners.
j. Results are interpreted in the light of previous results and experience. Interpretations provide useful insight into learning and foster continuous improvement of practice.
k. Records of the assessment meet the quality requirements of the school.
Benefits of summative assessment
a. Development of a standardised (consistent) set of information about each learner's achievement
b. Help in determining key learning goals and teaching responsibilities
c. Combination of test scores so that educational decisions can be based on this information
d. A rationale for large-scale educational decision-making
e. Acknowledgement of a job well done
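Benefit (c) can be made concrete with a short sketch. The Python below combines hypothetical continuous-assessment marks and an end-of-year examination mark into a single promotion grade; the 40/60 weighting and the grade boundaries are illustrative assumptions, not taken from any particular policy.

    # Minimal sketch: combining continuous assessment (CA) marks with an
    # end-of-year exam mark to produce one promotion grade.
    # The 40/60 weighting and the grade boundaries are assumptions.

    def promotion_grade(ca_marks, exam_mark, ca_weight=0.4, exam_weight=0.6):
        """Return (final_percentage, letter_grade) for one learner."""
        ca_average = sum(ca_marks) / len(ca_marks)   # accumulated CA work
        final = ca_weight * ca_average + exam_weight * exam_mark
        for boundary, letter in [(80, "A"), (70, "B"), (60, "C"), (50, "D")]:
            if final >= boundary:
                return final, letter
        return final, "E"

    # Example: four CA tasks during the year plus one examination.
    print(promotion_grade([72, 65, 80, 70], 68))  # -> (~69.5, 'C')
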
1.7.3 Self-assessment
Self-assessment refers to the assessment of activities within and outside the classroom that
enable learners to reflect on what they have learnt and to evaluate their learning against a set
of assessment criteria. It describes the process of a learner gaining an understanding of how he/she learns, as opposed to what he/she is learning, and guides the learner to a greater understanding of him-/herself as a learner.

Advantages of self-assessment to learners


a. Learners become able to identify their own learning needs.
b. They get to know their strengths and weaknesses.
c. They see how they are doing.
d. It makes them think.
e. They get to know why and when their work is good.
f. They know what to do to improve their work and/or learning.
g. It helps them to remember and understand better.
h. It encourages them to become responsible for their own learning.
i. Learners are able to recognise the next steps in learning.
j. Learners feel secure about not always being right.
k. It raises self-esteem and learners become more positive, e.g. "I can" as opposed to "I can't".
l. Learners are actively involved in the learning process (partner, not recipient).
m. Learners become more independent and motivated.
n. The learner recognises difficulties as a true sign of learning.
o. They see that others have the same problems.
p. They develop an enthusiasm for reflection.
q. Their learning improves: they concentrate on "how" rather than "what" they learn.

Advantages of self-assessment to the teachers


a. There is a shift of responsibility from teacher to pupil.
b. Lessons are smoother and more efficient when pupils are motivated and independent.
c. Feedback helps teachers identify pupil progress.
d. It identifies the next step/s for a group or individual.
e. It matches pupils' perceptions of understanding with those of teachers: pupils explain strategies, and in this way the teacher identifies their thinking processes.
f. More efficient lessons allow greater challenge.

Disadvantages of self-assessment
a. It increases the teacher's workload: while learners are learning how self-assessment works, the teacher has to guide them, and it takes time for learners to become skilled in self-assessment.
b. There is a risk of grades being inflated or unreliable.
c. Learners may feel ill-equipped to assess themselves or lack the confidence to do so.
1.7.4. Peer assessment
Peer-assessment is nearly the same as self-assessment, except that learners are explicitly involved in helping each other to identify the standards and criteria, and in making judgements about each other's work in relation to those criteria.

Advantages of peer assessment


a. It brings learners into contact with the learning content and helps them to view learning as non-threatening.
b. It encourages co-operation between learners.
c. It encourages active learning.
d. Learners receive prompt feedback, often directly after completing the assessment activity.
e. Learners learn to respect diverse capabilities, talents and ways of learning.
Disadvantages of peer assessment
a. It takes time to explain to learners how peer assessment works and what is expected of them.
b. It takes time for learners to become skilled in assessing their peers.
c. There is a risk of peers giving inflated marks because of friendship or peer pressure.
d. Learners have a tendency to give everybody the same mark.
e. Learners may feel ill-equipped to do the assessment, especially if they have not done it before.
f. Learners may feel reluctant to pass judgement on the work of their peers.
g. Learners may discriminate against another learner and give a lower mark.

Similarities between peer- and self-assessment


There are two essential components of successful peer and self-assessment. These are:
a. Learners have to be involved in the process of identifying the standards and/or criteria by which their work, and that of their peers, will be judged.
b. Learners have to be involved in the process of making judgements about the extent to which their work, and the work of fellow learners, has or has not met the identified standards and/or criteria.

1.7.5. Diagnostic Assessment


Diagnostic assessment can help you identify your students' current knowledge of a subject, their skill sets and capabilities, and clarify misconceptions before teaching takes place. Knowing students' strengths and weaknesses can help you better plan what to teach and how to teach it.

Types of Diagnostic Assessments
• Pre-tests (on content and abilities)
• Self-assessments (identifying skills and competencies)
• Discussion board responses (on content-specific prompts)
• Interviews (brief, private, 10-minute interview of each student)

1.7.6. Continuous assessment

• Continuous assessment is the regular assessment of learning performance related to a course module. It is separate from examinations and is accompanied by regular feedback.
• Continuous assessment can take various forms, depending on the final objectives and competencies. A few examples:
  • Regular observation of practical skills or attitudes, e.g. nursing skills, a team's collaboration skills, collaboration during tutorials, etc.
  • Regular feedback on a portfolio, paper, etc.
  • Regular assessment of verbal language skills.
  • Regular testing of insight into theoretical concepts.
• Continuous assessment can take place within various types of contact moments, e.g. practicals, workshops, lectures, placements, projects, cases, etc.
• The continuous assessment mark is the result of the on-going assessment of learning performance on a course module. The assessment tasks can verify which developmental process the learner is going through, and the continuous assessment (partially) counts towards the final mark for the course module.
• Continuous assessment often goes hand in hand with information about the assessment criteria, how the learner performed, what went smoothly, what went less smoothly, and the things he/she still has to work on.

Different types of questions

1. Objective questions, which require students to select the correct response from several alternatives or to supply a word or short phrase to answer a question or complete a statement.

Examples: multiple choice, true-false, matching, completion

2. Subjective or essay questions, which permit the student to organize and present an original answer.

Examples: short-answer essay, extended-response essay, problem solving, performance test items

This source also suggests guidelines for choosing between them:

Subjective questions are appropriate when:

• The group to be tested is small and the test is not to be reused.
• You wish to encourage and reward the development of student skill in writing.
• You are more interested in exploring student attitudes than in measuring achievement.

Objective tests are appropriate when:

• The group to be tested is large and the test may be reused.
• Highly reliable scores must be obtained as efficiently as possible.
• Impartiality of evaluation, fairness, and freedom from possible test-scoring influences are essential.

Either essay or objective tests can be used to:

• Measure almost any important educational achievement a written test can measure.
• Test understanding and ability to apply principles.
• Test ability to think critically.
• Test ability to solve problems.

Types of validity in assessment

Validity: Defined
The term validity has varied meanings depending on the context in which it is being used.
Validity generally refers to how accurately a conclusion, measurement, or concept
corresponds to what is being tested. For this lesson, we will focus on validity in assessments.
Validity is defined as the extent to which an assessment accurately measures what it is
intended to measure. The same can be said for assessments used in the classroom. If an
assessment intends to measure achievement and ability in a particular subject area but then
measures concepts that are completely unrelated, the assessment is not valid.

Types of Validity
There are three types of validity that we should consider: content, predictive, and construct
validity. Content validity refers to the extent to which an assessment represents all facets of
tasks within the domain being assessed. Content validity answers the question: Does the
assessment cover a representative sample of the content that should be assessed?
For example, if you gave your students an end-of-the-year cumulative exam but the test only covered material presented in the last three weeks of class, the exam would have low content validity: the entire semester's worth of material would not be represented on the exam. Educators should strive for high content validity, especially for summative assessment purposes. Summative assessments are used to determine the knowledge students have gained during a specific time period.

Content validity is increased when assessments require students to make use of as much of
their classroom learning as possible.
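As a rough illustration of checking content coverage, the Python sketch below compares the syllabus topics that should be assessed against the topics an exam actually samples. The topic names, items, and data are hypothetical.

    # Rough sketch: estimating content coverage of an exam against a syllabus.
    # Topic names and exam items below are hypothetical illustrations.

    syllabus_topics = {"fractions", "decimals", "percentages", "ratio", "algebra"}
    exam_items = [
        ("Q1", "fractions"), ("Q2", "fractions"), ("Q3", "decimals"),
        ("Q4", "percentages"), ("Q5", "decimals"),
    ]

    covered = {topic for _, topic in exam_items}
    coverage = len(covered & syllabus_topics) / len(syllabus_topics)

    print(f"Topics covered: {sorted(covered)}")
    print(f"Coverage: {coverage:.0%}")   # 3 of 5 topics -> 60%, low content validity
    print(f"Missing: {sorted(syllabus_topics - covered)}")
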
The next type of validity is predictive validity, which refers to the extent to which a score on
an assessment predicts future performance.
Construct validity is used to determine how well a test measures what it is supposed to measure. In other words, is the test constructed in a way that it successfully tests what it claims to test?
Construct validity is usually verified by comparing the test to other tests that measure similar
qualities to see how highly correlated the two measures are. For example, one way to
demonstrate the construct validity of a cognitive aptitude test is by correlating the outcomes
on the test to those found on other widely accepted measures of cognitive aptitude.

Factors That Impact Validity


It is also important to understand how external and internal factors impact validity.
A student's reading ability can have an impact on the validity of an assessment. For
example, if a student has a hard time comprehending what a question is asking, a test will not
be an accurate assessment of what the student truly knows about a subject. Educators should
ensure that an assessment is at the correct reading level of the student.
Student self-efficacy can also impact validity of an assessment. If students have low self-
efficacy or beliefs about their abilities in the particular area they are being tested in, they will
typically perform lower. Their own doubts hinder their ability to accurately demonstrate
knowledge and comprehension.
Student test anxiety level is also a factor to be aware of. Students with high test anxiety will underperform due to emotional and physiological factors, such as upset stomach, sweating, and increased heart rate, which lead to a misrepresentation of student knowledge.

Reliability

Reliability is the degree to which an assessment tool produces stable and consistent results.

Types of Reliability

1. Test-retest reliability is a measure of reliability obtained by administering the same test twice, over a period of time, to a group of individuals. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time.

Example: A test designed to assess student learning in psychology could be given to a group of students twice, with the second administration perhaps coming a week after the first. The obtained correlation coefficient would indicate the stability of the scores.
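As a minimal sketch of that correlation step, the Python below correlates two hypothetical sets of scores from the two administrations; numpy's corrcoef computes the Pearson correlation coefficient.

    import numpy as np

    # Hypothetical scores for the same ten students on two administrations
    # of the same test, one week apart.
    time1 = np.array([78, 65, 90, 55, 82, 70, 60, 88, 74, 67])
    time2 = np.array([75, 68, 92, 58, 80, 73, 57, 85, 76, 64])

    # Pearson correlation between the two administrations; values near 1.0
    # indicate stable scores, i.e. good test-retest reliability.
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"Test-retest reliability estimate: r = {r:.2f}")
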

2. Parallel forms reliability is a measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, knowledge base, etc.) to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.

Example: If you wanted to evaluate the reliability of a critical thinking assessment, you might create a large set of items that all pertain to critical thinking and then randomly split the questions up into two sets, which would represent the parallel forms.
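A minimal sketch of that random split, using a hypothetical item bank; each half then serves as one parallel form, and the two resulting score sets would be correlated just as in the test-retest example.

    import random

    # Hypothetical bank of critical-thinking items.
    item_bank = [f"CT-item-{i:02d}" for i in range(1, 21)]

    # Randomly split the bank into two parallel forms of equal length.
    random.seed(7)                      # fixed seed for a reproducible split
    shuffled = random.sample(item_bank, len(item_bank))
    form_a, form_b = shuffled[:10], shuffled[10:]

    print("Form A:", form_a)
    print("Form B:", form_b)
    # Administer both forms to the same group, then correlate the two
    # score sets (e.g. with numpy.corrcoef) to estimate reliability.
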

3. Internal consistency reliability is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.
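The source does not name a statistic here, but the most widely used index of internal consistency is Cronbach's alpha. For a test of k items, where \sigma_i^2 is the variance of item i and \sigma_X^2 is the variance of total test scores:

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)

Values close to 1 indicate that the items rise and fall together, suggesting they probe the same construct.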

Factors affecting reliability in assessment

1. Test length: Generally, the longer a test is, the more reliable it is (the Spearman-Brown formula sketched after this list makes this relationship precise).
2. Speed: When a test is a speed test, reliability can be problematic. It is inappropriate to
estimate reliability using internal consistency, test-retest, or alternate form methods. This is
because not every student is able to complete all of the items in a speed test. In contrast, a
power test is a test in which every student is able to complete all the items.
3. Group homogeneity: In general, the more heterogeneous the group of students who
take the test, the more reliable the measure will be.
4. Item difficulty: When there is little variability among test scores, the reliability will
be low. Thus, reliability will be low if a test is so easy that every student gets most or all of
the items correct or so difficult that every student gets most or all of the items wrong.
5. Objectivity: Objectively scored tests, rather than subjectively scored tests, show a
higher reliability.
6. Test-retest interval: The shorter the time interval between two administrations of a
test, the less likely that changes will occur and the higher the reliability will be.
7. Variation within the testing situation: Errors in the testing situation (e.g., students misunderstanding or misreading test directions, noise level, distractions, and sickness) can cause test scores to vary.
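Factor 1 is usually made precise with the Spearman-Brown prophecy formula (not named in the source): if a test with reliability r is lengthened by a factor n using comparable items, the predicted reliability is

    r_{new} = \frac{n r}{1 + (n - 1) r}

For example, doubling (n = 2) a test whose reliability is 0.60 predicts a reliability of (2 x 0.60) / (1 + 0.60) = 0.75.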

Methods used to determine validity and reliability in assessment

How do you determine if a test has validity and reliability?

1. Validity

Validity is arguably the most important criterion for the quality of a test. The term validity
refers to whether or not the test measures what it claims to measure. On a test with high
validity the items will be closely linked to the test's intended focus. For many certification
and licensure tests this means that the items will be highly related to a specific job or
occupation. If a test has poor validity then it does not measure the job-related content and
competencies it ought to. When this is the case, there is no justification for using the test
results for their intended purpose.

There are several ways to estimate the validity of a test including content validity, concurrent
validity, and predictive validity. The face validity of a test is sometimes also mentioned.

2. Reliability

Reliability is one of the most important elements of test quality. It has to do with the consistency, or reproducibility, of an examinee's performance on the test.

For example, if you were to administer a test with high reliability to an examinee on two
occasions, you would be very likely to reach the same conclusions about the examinee's
performance both times. A test with poor reliability, on the other hand, might result in very
different scores for the examinee across the two test administrations. If a test yields
inconsistent scores, it may be unethical to take any substantive actions on the basis of the test.
There are several methods for computing test reliability including test-retest reliability,
parallel forms reliability, decision consistency, internal consistency, and interrater reliability.
For many criterion-referenced tests decision consistency is often an appropriate choice.

Bloom’s Taxonomy

Benjamin Bloom introduced Bloom's Taxonomy in 1956. The initial focus was primarily academic, but the taxonomy has since found a comfortable place in training as well. Bloom and associates identified three domains of learning:

1. Cognitive: mental skills, intellectual capability (knowledge)
2. Affective: feelings, motivation, behaviour (attitude)
3. Psychomotor: manual or physical skills (skills)

These are sometimes identified as "Do-Think-Feel" or KSA (Knowledge, Skills, and Attitude).

In this blog, the focus is on the cognitive domain and the application of the six levels of
Bloom’s Taxonomy. These levels represent a hierarchy of learning that goes from the simple
(level 1) to the complex (level 6). The levels are as follows:

1. Knowledge – to check learner ability to recall basic information


2. Comprehension – confirm understanding
3. Application – use or apply knowledge
4. Analysis – interpret elements; see if the information can be broken into components
5. Synthesis – create or develop plans
6. Evaluation – assess, critical thinking

Now that we have defined the six levels, let’s look at how they can be applied to instructional
design. Lynne’s blog explained how Bloom’s Taxonomy could be used in structuring
questions; this blog will add how it applies to the testing process.

1. Knowledge – to check learner ability to recall basic information

This is usually assessed using a non-performance test that checks for knowledge of the
information the learner has been taught. This is accomplished through quizzes using assorted
multiple choice, matching, or true/false questions. You want the learner to define, repeat,
recall from memory, list, etc. the information he/she has learned. (e.g. List the six steps of
Langevin’s learning strategy.)

2. Comprehension – confirm understanding

This next level is also a non-performance check for knowledge, but now you want the learner to "put it in their own words" by describing, explaining, discussing, etc. the information he/she has been taught. (e.g. Describe the six steps of the learning strategy.)

3. Application – use or apply knowledge

Here, the focus is on performance-based assessment. You have the learner apply, interpret,
practice, etc. the information he/she has been taught. (e.g. Create a brief lesson using the
learning strategy that you will present to the group. You must use all six steps.)

4. Analysis – interpret elements; break the information into smaller parts

For this level, you ask the learner to compare, investigate, solve, examine, tell why, etc.

(e.g. This is an outline for a course, which was not received well by the
learners. Compare this to the learning strategy; identify which part(s) of the learning strategy
were omitted, and how this omission contributed to the course not being successful.)

5. Synthesis – create/develop plans; put pieces together to form a new whole

Here, you have the learner suppose, create, construct, improve, etc. (e.g. This is a handout of a course that is structured according to the learning strategy. It follows the six steps, but is not as dynamic as it could be. What would you add to each step to create a more dynamic course that gets the learner involved?)

6. Evaluation – assess, critical thinking

In this final level of Bloom's Taxonomy, you ask the learner to offer opinions, criticize, judge, recommend, justify, evaluate, or explain which option is better, based on a set of knowledge and criteria. (e.g. You have examples of two courses that use the learning strategy. First, compare the examples against the learning strategy, then compare one example against the other. Determine which one best exemplifies the learning strategy. Be prepared to present your decision to the table group.)

