Notes

The document outlines the complexities of modern assessment in education, emphasizing the need to evaluate cognitive, affective, and psychomotor skills. It details various types of assessments, their purposes, and the principles of high-quality assessment, including reliability and validity. Additionally, it discusses the importance of aligning learning outcomes with teaching methods and assessment techniques to enhance the learning experience.

Lecture 1: Basic Concepts in Assessment

“Teaching today is a far more complicated profession than it was a couple of
decades ago. The dynamically changing pace of development in science and
technology has brought about corresponding changes in the way we view
teaching. Thus, while earlier we contented ourselves with testing and
measuring students’ cognitive skills through teacher-made tests, today we
test not only to measure cognitive development, but the affective and
psychomotor development of the learners as well. The teacher is expected to
be fully knowledgeable of the measurement and assessment procedures that
support testing in the three domains: cognitive, affective, and
psychomotor.”
– Dr. Roberto Padua

A test is a set of questions with an accepted set of presumably correct
answers, designed to gather information about some individual characteristic,
such as academic achievement.

Non-test assessments are usually based on the teacher’s direct observations as
students perform assigned tasks. Examples include observations, checklists,
rating scales, anecdotal records, debates, demonstrations, journals, panel
discussions, portfolios of student work, and the like.

Testing is the process of administering a test to an individual or a group of
students. This process involves steps such as test preparation, test
administration, and test collection.

Measurement is the process of quantifying test results.

Assessment is the process of gathering information about student learning and
then analyzing and interpreting that information for the purpose of making
decisions.

Evaluation is the process of determining the changes in the learner as a
result of teaching and of his or her experiences.

Type of Tests
(De Guzman and Adamos, 2015)

Mode of Response:
 Oral test
 Written test
 Performance test

Ease of Quantification of Response:
 Objective test
 Subjective test

Test Constructor:
 Standardized test
 Non-standardized test

Interpreting Results:
 Tests that yield norm-referenced interpretations
 Tests that allow criterion-referenced interpretations

Nature of Response:
 Personality test
 Achievement test
 Intelligence test

Roles of Assessment:
 Placement role is used to determine the entry behavior of students. It
is also used to determine students’ performance at the beginning of
instruction.
 Formative role is an assessment during the course of instruction rather
than after it is completed. Your on-going observation and monitoring of
students’ learning while you teach informs you what to do next.
 Diagnostic role aims to determine the specific learning needs of
students so that those needs can be met through regular or remedial
instruction.
 Summative role is used to determine the mastery at the end of the
course. It is the process of making the overall assessment or decision
about the program.

Purposes of Assessment:
 Assessment for Learning pertains to diagnostic and formative
assessment tasks which are used to determine learning needs and
monitor academic progress of students during a unit or a block of
instruction and guide instruction.
 Assessment as Learning employs tasks or activities that provide
students with an opportunity to monitor and further their own learning,
to think about their personal learning habits and how they can adjust
their learning strategies to achieve their goals.
 Assessment of Learning is summative and done at the end of a unit,
task, process or period. Unit tests and final projects are typical
examples of summative assessment. Assessment of learning is used
for grading, evaluation, and reporting purposes.

Lecture 2: Principles of High-Quality Assessment

Principles of High-Quality Assessment:
1. Clarity of learning outcomes
2. Appropriateness of assessment methods
3. Reliability
4. Validity
5. Practicality and efficiency
6. Fairness
7. Ethics and positive consequences

Clarity of Learning Outcomes

A learning outcome is a statement of what a learner is expected to know,
understand, and be able to do at the end of a period of learning, and of how
that learning is to be demonstrated (Moon, 2002).

Learning outcomes are statements of what the student is expected to be able
to do as a result of a learning activity (Jenkins and Unwin, 2001).

One of the great advantages of learning outcomes is that they are clear
statements of what the learner is expected to achieve and how he or she is
expected to demonstrate that achievement.

Before beginning to write learning outcomes in the appropriate format, it
is important to understand the learning domains and the taxonomy of the
levels of learning represented in these domains: cognitive, affective, and
psychomotor.

Cognitive Domain: Revised Bloom’s Taxonomy

Remembering
 Recalling information

Sample Verbs:
 Choose, cite, enumerate, identify, label, list, locate, name, outline,
quote, recall, recognize, retrieve, repeat, select, sort, state, underline,
and write

Understanding
 Explaining ideas or concepts

Sample Verbs:
 Abstract, categorize, clarify, classify, compare, conclude, contrast,
construct, draw, exemplify, explain, extrapolate, generalize, give,
illustrate, infer, interpret, interpolate, instantiate, map, match,
paraphrase, predict, represent, summarize, and translate

Applying
 Using information in another familiar situation

Sample Verbs:
 Add, apply, calculate, carry out, compute, divide, dramatize, draw,
execute, implement, manipulate, multiply, paint, practice, sequence,
show, subtract, translate, and use

Analyzing (Critical Thinking)
 Breaking information into parts to explore understandings and
relationships

Sample Verbs:
 Analyze, attribute, cohere, compare, contrast, criticize, deconstruct,
determine, differentiate, discriminate, distinguish, find, focus, integrate,
organize, outline, select, separate, scrutinize, and structure
Evaluating (Critical Thinking)
 Justifying a decision or course of action

Sample Verbs:
 Appraise, argue, assess, choose, check, conclude, coordinate, criticize,
critique, debate, decide, deduce, defend, detect, discriminate, evaluate,
infer, judge, justify, measure, monitor, predict, probe, rank, rate,
recommend, revise, and score

Creating (Critical Thinking)
 Generating new ideas, products, or ways of viewing things

Sample Verbs:
 Act, assemble, blend, combine, compile, compose, construct, create,
design, develop, devise, formulate, forecast, generate, hypothesize,
imagine, invent, organize, originate, predict, plan, prepare, propose,
produce, and set up

Affective Domain: Revised Bloom's Taxonomy

The affective domain includes the manner in which we deal with things
emotionally, such as feelings, values, appreciation, enthusiasms, motivations,
and attitudes.
Receiving (Awareness)

Sample Verbs:
 Accept, acknowledge, ask, attend, describe, explain, follow, focus,
listen, locate, observe, realize, receive, recognize, and retain

Responding (React)

Sample Verbs:
 Behave, cite, clarify, comply, contribute, cooperate, discuss, examine,
follow, interpret, model, perform, present, question, react, respond,
show, and study

Valuing (Comprehend and act)

Sample Verbs:
 Accept, adapt, argue, balance, challenge, choose, confront, criticize,
debate, differentiate, defend, influence, justify, persuade, prefer,
recognize, refute, seek, and value

Organizing (Personal Value System)

Sample Verbs:
 Adapt, adjust, alter, arrange, build, change, compare, contrast,
customize, develop, formulate, improve, manipulate, modify, practice,
prioritize, reconcile, relate, and revise
Internalizing (Characterization by a value)

Sample Verbs:
 Act, authenticate, characterize, defend, display, embody, habituate,
influence, internalize, practice, produce, represent, solve, validate, and
verify

Psychomotor Domain: Simpson's Taxonomy (1966)

The psychomotor domain includes physical movement, coordination, and use
of the motor-skill areas.

Perception
 The ability to use sensory cues to guide motor activities

Sample Verbs:
 Choose, describe, detect, differentiate, distinguish, identify, isolate,
relate, and select

Set
 Readiness to act

Sample Verbs:
 Begin, display, explain, move, proceed, react, show, state, and
volunteer

Guided Response
 The early stages in learning a complex skill that includes imitation and
trial and error

Sample Verbs:
 Copy, trace, follow, react, reproduce, and respond

Mechanical Response (Basic proficiency)

Sample Verbs:
 Assemble, calibrate, construct, dismantle, display, fasten, fix, grind,
heat, manipulate, measure, mend, mix, organize, and sketch

Complex Overt Response (Expert)

Sample Verbs:
 The key words are the same as for mechanical response, but with adverbs
or adjectives indicating that the performance is quicker, better, and
more accurate

Adaptation
 Skills are well developed and the individual can modify movement
patterns to fit special requirements

Sample Verbs:
 Adapt, alter, change, rearrange, reorganize, revise, and vary

Origination
 Creating new movement patterns to fit a particular situation or specific
problem

Sample Verbs:
 Arrange, build, combine, compose, construct, create, design, initiate,
make, and originate

Benefits of Learning Outcomes:
1. Better learning
2. Increased motivation
3. Better performance on assignments and tests
4. Focused and strategic teaching
5. Strategic assessment
6. Attention to outputs

Common Problems with Learning Outcomes

The Sinister Sixteen:
 Verbs that are internal and/or otherwise unobservable. The most
common verbs and phrases that Potter and Kustra (2012) see in learning
outcomes, all of them unacceptable, are: understand, appreciate,
comprehend, grasp, know, see, accept, learn, be familiar with, have a
knowledge of, be conscious of, perceive, apprehend, value, and get.

Lecture 3: Aligning Learning Outcomes, Teaching Methods, and Assessment
Techniques

This connection between learning outcomes, teaching methods, and
assessment helps to make the overall learning experience more transparent.

It is important that the assessment tasks mirror the learning outcomes
since, as far as the students are concerned, the assessment is the
curriculum: “From our students’ point of view, assessment always defines
the actual curriculum” (Ramsden, 2003).

Teacher and Student Perspectives Regarding Assessment (Biggs, 2003)

Teacher perspective:
1. Objectives
2. Desired learning outcomes
3. Teaching activities
4. Assessment

Student perspective:
1. Assessment
2. Learning activities
3. Outcomes
The Three Essentials of Alignment

1. Clearly define the general intended learning outcomes of the course.
a. Describe what the student should be able to do, not what the teacher
is expected to do.
b. Describe the intended product or result, not the intended process.
c. Focus on the task the learner is expected to perform, rather than on
specific topics or subject matter content.
d. Define only one intended outcome in each objective.
e. Select the proper level of generality. Do not be too specific, but also
avoid being too general. List these intended learning outcomes on the
syllabus.

2. Focus classroom activities, teaching, and readings on meeting the intended
learning outcomes of the course.

3. Assess only what was taught, read, and practiced.

Methods of Collecting Assessment Information

Objective assessments are those that require one, and only one, correct
answer.

Example Tools:
 Supply type
 Matching type
 Alternative-response type
 Completion type
 Enumeration type
 Multiple choice type

Non-objective assessments are evaluated subjectively, based on the rater’s
judgment about the issue, concept, ideas, and the like.

Example Tools:
 Restricted essay type
 Extended essay type
 Sentence completion
 Short responses
 Problem solving
 Oral questioning

Performance-based assessment is used when the objective of the lesson
requires students to perform an activity at the end of the lesson. There are
two types of performance-based assessment: process-oriented and
product-oriented. When the task applies to a real-world setting, the
assessment is authentic.

Example Tools:
 Projects
 Journals
 Science fair
 Art show
 Exhibits
 Portfolio collection

Affective assessment emphasizes a feeling tone, an emotion, and a degree of
acceptance or rejection.

Example Tools:
 Teacher observations
 Checklist/rating scale
 Inventories
 Self-report

Lecture 4: Principles of High-Quality Assessment (Reliability)

Reliability means that scores from an instrument are stable and consistent. It
refers to the consistency of scores obtained, how consistent they are for each
individual from one administration of a test to another and from one set of
items to another.

If the test is reliable, we would expect a student who receives a high score the
first time he takes the test to receive a high score the next time he takes the
test. The scores would probably not be identical, but they should be close.

Types of Reliability

The Test-Retest Reliability
 Test-retest reliability involves administering the same test twice to
the same group after a certain time interval has elapsed. A correlation
coefficient is then calculated to indicate the relationship between the
two sets of scores obtained. Correlation coefficients will be affected
by the length of time that elapses between the two administrations of
the test.
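To make the computation concrete, here is a minimal sketch (not part of the original notes) that estimates test-retest reliability with a hand-rolled Pearson correlation coefficient; the two score lists are invented for illustration.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(ss_x * ss_y)

# Hypothetical scores of eight students on the same test, given twice
first_administration  = [78, 85, 62, 90, 71, 88, 67, 80]
second_administration = [75, 88, 65, 92, 70, 85, 70, 78]

r = pearson_r(first_administration, second_administration)
print(f"test-retest reliability coefficient: {r:.2f}")
```

A coefficient near 1.0 (here roughly 0.96) indicates that students kept about the same rank order across the two administrations; a longer interval between administrations would normally pull this value down.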

Equivalent-Forms Reliability
 When equivalent-forms reliability is used, two different but
equivalent (also called alternate or parallel) forms of a test are
administered to the same group of individuals during the same time
period. Although the questions are different, they should sample the
same content, and they should be constructed separately from each
other.

Equivalent-Forms and Test-Retest Reliability
 It is possible to combine equivalent-forms and test-retest reliability
by giving two different forms of the same test with a time interval
between the two administrations.

Internal Consistency by Kuder-Richardson Formula
 The Kuder-Richardson formulas (notably KR-20) estimate internal
consistency from a single administration of a test whose items are
scored dichotomously (right or wrong). The estimate is based on the
number of items, the proportion of examinees answering each item
correctly, and the variance of the total scores.

Internal Consistency by Split-Half Reliability
 The split-half reliability involves scoring two halves (usually odd
items versus even items) of a test separately for each person and then
calculating a correlation coefficient for the two sets of scores.
Because this coefficient describes a test of only half the original
length, it is usually adjusted upward with the Spearman-Brown formula.

Factors Affecting Test Reliability:
 Test length
 Test content
 Item characteristics
 Score variability
 Group heterogeneity
 Student testwiseness
 Student motivation
 Time limits
 Cheating opportunities

Suggestions for Enhancing Reliability:

1. Use a sufficient number of items or tasks (longer tests are more
reliable).
2. Use independent raters or observers who provide similar scores for the
same performances.
3. Make sure the assessment procedures and scoring are as objective as
possible.
4. Continue assessment until results are consistent.
5. Eliminate or reduce the influence of extraneous events or factors.
6. Use shorter assessments more frequently, rather than fewer long ones.

Lecture 5: Validity

Validity is the most important idea to consider when preparing a test. More
than anything else, teachers want the information they obtain through the
use of a test to serve their purposes. In recent years, validity has been
defined as referring to the appropriateness, correctness, meaningfulness,
and usefulness of the specific inferences teachers make based on the data
they collect.

Validation is the process of collecting and analyzing evidence to support
such inferences.

Three Main Kinds of Evidence of Validity

1. Content-Related Evidence of Validity
 Content-related evidence of validity refers to the content and format
of the instrument.
 One key element in content-related evidence of validity, then,
concerns the adequacy of the sampling.
 The other aspect of content validation has to do with the format of
the instrument. This includes such things as the clarity of printing,
font size, adequacy of workspace, appropriateness of language, clarity
of directions, and so on.
 Another consideration related to content validity is instructional
validity, the extent to which an assessment is systematically sensitive
to the nature of instruction offered.

Structure of the Knowledge Dimension of the Revised Taxonomy

Factual Knowledge
 The basic elements that students must know to be acquainted with a
discipline or solve problems in it.
 Knowledge of terminology
 Knowledge of specific details and elements
Conceptual Knowledge
 The interrelationships among the basic elements within a larger
structure that enable them to function together.
 Knowledge of classifications and categories
 Knowledge of principles and generalizations
 Knowledge of theories, models, and structures

Procedural Knowledge
 How to do something; methods of inquiry; and criteria for using skills,
algorithms, techniques, and methods.
 Knowledge of subject-specific skills and algorithms
 Knowledge of subject-specific techniques and methods
 Knowledge of criteria for determining when to use appropriate
procedures

Metacognitive Knowledge
 Knowledge of cognition in general as well as awareness and knowledge
of one’s own cognition
 Strategic knowledge
 Knowledge about cognitive tasks, including appropriate contextual and
conditional knowledge.
 Self-knowledge

Structure of the Cognitive Process Dimension of the Revised Taxonomy

Remember
 Retrieving relevant knowledge from long-term memory.
Understand
 Determining the meaning of instructional messages, including oral,
written, and graphic communication.
Apply
 Carrying out or using a procedure in a given situation.
Analyze
 Breaking material into its constituent parts and detecting how the parts
relate to one another and to an overall structure or purpose.
Evaluate
 Making judgments based on criteria and standards.
Create
 Putting elements together to form a novel, coherent whole or make an
original product.
2. Criterion-Related Evidence of Validity
 Criterion-related evidence of validity refers to the relationship between
scores obtained using the test instrument and scores obtained using one
or more other instruments (often called the criterion).
 There are two forms of criterion-related validity: predictive and
concurrent. To obtain evidence of predictive validity, teachers allow a
time interval to elapse between the administration of the instrument
and obtaining the criterion scores.

3. Construct-Related Evidence of Validity
 Construct-related evidence of validity refers to the nature of the
psychological construct or characteristic being measured by the
instrument. It is the broadest of the three categories of evidence for
validity that we are considering.
 There is no single piece of evidence that satisfies construct-related
validity.

Test Validity
 A measure is valid when it measures what it is supposed to measure. If
a quarterly exam is valid, then the contents should directly measure
the objectives of the curriculum. If a scale that measures personality is
composed of five factors, then the scores on the five factors should
have items that are highly correlated. If an entrance exam is valid, it
should predict students’ grades after the first semester.

Different Ways to Establish Test Validity

Content Validity
 When the items represent the domain being measured
 The items are compared with the objectives of the program. The items
need to measure directly the objectives (For achievement) or definition
(For scales). A reviewer conducts the checking.

Face Validity
 When the test is presented well, free of errors, and administered well.
 The test items and layout are reviewed and tried out on a small group
of respondents. A manual for administration can be made as a guide
for the test administrator.

Predictive Validity
 A measure should predict a future criterion. Example is an entrance
exam predicting the grades of the students after the first semester.
 A correlation coefficient is obtained where the X variable is used as the
predictor and the Y variable as the criterion.

Construct Validity
 The components or factors of the test should contain items that are
strongly correlated.
 The Pearson r can be used to correlate the items for each factor.
However, there is a technique called factor analysis that determines
which items are highly correlated and so form a factor.

Concurrent Validity
 When two or more measures are present for each examinee that
measure the same characteristic.
 The scores on the measures should be correlated.

A test is valid when:

1. It measures abilities, qualities, skills, and information it is supposed to
measure.
2. It is adapted to the intellectual level or maturity and background
experience of students.
3. Materials actually included are of prime importance.
4. There is wide sampling of items among essentials which the students
are expected to master as provided in the course of study.
5. It includes skills or abilities which are essential to success in a given
field.
6. The test items cover the materials found in textbooks and courses of
study in the field.
7. The test items are of the type found in the recommendations of
educational committees.
8. The test items are of social utility, reflecting actual life situations.
9. The test items are of the types that parallel good teaching.

Lecture 6: Practicality and Efficiency

Practicality and efficiency refer to the degree to which the measuring
instrument can be practically and efficiently used by teachers.

Here are some suggestions to follow:

 The teacher should be familiar with the assessment method.
 Teachers should avoid overly complex administration procedures.
 Ease of interpretation
 Ease of scoring
 Evaluation should be finished within the required time.
 Reusability
 Cost

Fairness

Students should have knowledge of the learning outcomes and assessment.
 A fair assessment is one in which it is clear what will and will not be
tested. Both the content of the assessment and the scoring criteria
should be public. Being public means that students know the content
and scoring criteria prior to the assessment, and often prior to
instruction.

Students should be provided equal opportunity to learn.
 Opportunity to learn means that students know what to learn and are
then provided ample time and appropriate instruction.
 In the classroom, teachers aim for all students to learn. This
opportunity should be provided to all students regardless of race,
religious denomination, and the like, particularly right before the
assessment.

The students should have acquired prerequisite knowledge and skills.
 This means that teachers need to have a good understanding of the
needed prerequisites.
 Teachers should deal with the issue of prior knowledge in teaching. This
is the role of placement evaluation: knowing the students’ entry
behavior. Because of individual differences in learning, some students
may not have prior knowledge of the subject matter to be taught.

The teacher should avoid bias in their assessment tasks and procedures.
 Assessment tasks and procedures should be done systematically.
 There are two major forms of assessment bias: offensiveness and
unfair penalization.
 Offensiveness occurs if the content of the assessment offends, upsets,
distresses, angers, or otherwise creates negative affect for particular
students or a subgroup of students.
 Unfair penalization is bias that disadvantages a student because of
content that makes it more difficult for students from some groups to
perform as compared to students from other groups.
 Another type of assessment task bias that has received much attention
recently concerns accommodating exceptional children and children with
special needs.

Positive Consequences

The nature of classroom assessments has important consequences for teaching
and learning.

On students:
 Assessment affects the way students perceive the teacher and gives
them an indication of how much the teacher cares about them and
what they learn.
 Undeniably, it is a common classroom scenario for students to display
a phobia toward tests.
 Students develop fear when evaluation is based on tests alone, so
students should be given the chance to evaluate their own learning.
Students can be expected to express their feelings and thoughts toward
evaluation positively if they realize that, through this process, they
are given a chance to learn more. Their motivation is then no longer
directed toward grades, but toward further learning.

On teachers:
 If teachers fully internalize assessment and evaluation, they are given
the opportunity to look inward, to introspect, and to examine their own
performance. The results of the evaluation can provide information
about the areas of instruction the teacher needs to improve on. This
is an opportunity for continuous professional development.

Ethics

Although assessment is often thought of as a technical activity, there are
ethical concerns associated with the assessment process. Teachers must be
aware of their ethical responsibilities because their decisions greatly
influence pupils’ self-perception of their performance. Bias can result in
unethical gathering, recording, and reporting of data.
