El fenix_Edu_AL
Assessment
of Learning
El Fénix Series
This book is free and open-source
If you come across incorrect facts, calculations, or any other errors in this book, please email the details to latexbook12@gmail.com along with the book name (as the subject) and the page number. Your cooperation is highly appreciated.
El Fénix Series is an initiative by a group of students of Regional Institute of Education Mysore. Our team is working on many projects to help the student community of RIE Mysore. We want you to make proper use of the educational materials we prepare, and help us to help you.
Take care friends.
Team El Fénix
K Shania Kariappa (Book Incharge, Assessment of Learning)
Karthik V Pai (Founder, El Fénix)
Jayaprakash H M (Founder, El Fénix)
Vaishnav Sankar K (Founder, El Fénix)
Rohit Raj (Editor)
Ritik Roshan Mohanty (Editor)
Kirthik R (Editor)
Jyotirmayee Swain (Proof Reader)
Contents
1.10 Recent trends in Assessment and Evaluations
1.10.1 Assessment for learning
1.10.2 Assessment as learning
1.10.3 Assessment of learning
1.10.4 Relationship with Formative and Summative
1.10.5 Authentic Assessment
1.11 Achievement surveys – [State & National]
1.11.1 Online Assessment
1.11.2 On demand assessment/ evaluation
1.11.3 Focus on Assessment and Evaluation in various Educational Commissions and NCFs
1.1.6 Meaning of Evaluation
Evaluation is a broader term that refers to all of the methods used to find out what happens as
a result of using a specific intervention or practice. Evaluation is the systematic assessment of the
worth or merit of some object. It is the systematic acquisition and assessment of information to
provide useful feedback about some object.
A pre-test or needs assessment informs instructors what students know and do not know at the
outset, setting the direction of a course. If done well, the information garnered will highlight the gap
between existing knowledge and a desired outcome. Accomplished instructors find out what students already know, and use that prior knowledge as a starting point for developing new understanding.
The same is true for data obtained through assessment done during instruction. By checking in with
students throughout instruction, outstanding instructors constantly revise and refine their teaching
to meet the diverse needs of students.
What and how students learn depends to a major extent on how they think they will be assessed.
Assessment practices must send the right signals to students about what to study, how to study, and
the relative time to spend on concepts and skills in a course. Accomplished faculty communicate
clearly what students need to know and be able to do, both through a clearly articulated syllabus,
and by choosing assessments carefully in order to direct student energies. High expectations for learning result in students who rise to the occasion.
Effective assessment provides students with a sense of what they know and don’t know about a
subject. If done well, the feedback provided to students will indicate to them how to improve their
performance. Assessments must clearly match the content, the nature of thinking, and the skills
taught in a class. Through feedback from instructors, students become aware of their strengths and
challenges with respect to course learning outcomes. Assessment done well should not be a surprise
to students.
Assessment informs teaching practice
Reflection on student accomplishments offers instructors insights on the effectiveness of their teaching
strategies. By systematically gathering, analysing, and interpreting evidence we can determine how
well student learning matches our outcomes / expectations for a lesson, unit or course. The knowledge
from feedback indicates to the instructor how to improve instruction, where to strengthen teaching,
and what areas are well understood and therefore may be cut back in future courses.
Through careful analysis it is possible to determine the challenges and weaknesses of instruction
in order to support student learning better. Some topics or concepts are notoriously difficult, and
there may be a better approach to use. Perhaps a model, simulation, experiment, example or illus-
tration will clarify the concept for students. Perhaps spending a bit more time, or going over a topic
in another way will make a difference. If the problem is noticed late in the course, an instructor may
plan to make any instructional changes for the next time the course is taught, but it is helpful to
make a note of the changes needed at the time so that the realization is not lost.
1.3 Importance of assessment & evaluation for quality edu-
cation – as a tool in Pedagogic decision making
Performance in schools is increasingly judged on the basis of effective learning outcomes. Information
is critical to knowing whether the school system is delivering good performance and to providing
feedback for improvement in student outcomes. Countries use a range of techniques for the evaluation
and assessment of students, teachers, schools and education systems. Many countries test samples
and/or all students at key points, and sometimes follow students over time.
1. Student Assessment : Several common policy challenges arise concerning student assess-
ment: aligning educational standards and student assessment; balancing external assessments
and teacher-based assessments in the assessment of learning and integrating student formative
assessment in the evaluation and assessment framework.
2. Teacher Evaluation : Common policy challenges in teacher evaluation are: combining the
improvement and accountability functions of teacher evaluation; accounting for student results
in evaluation of teachers; and using teacher evaluation results to shape incentives for teachers.
3. School Evaluation : School evaluation presents common policy challenges concerning aligning external evaluation of schools with internal school evaluation; providing balanced public reporting on schools; and improving the data handling skills of school agents.
4. System Evaluation : Common policy challenges for evaluation of education systems are : meeting information needs at system level; monitoring key outcomes of the education system; and maximising the use of system-level information.
2. Basic conditions and needs for the development : How it can be enhanced and ensured.
3. Period : How long the development will take until the defined goals can be reached.
4. Limits of developmental possibilities, either referring to the defined goals (selection assessment),
or generally, with a realistic time frame of 3 to 5 years.
5. Quality assurance and sustainability : How the results can be monitored and ensured in
the long term.
The prognostic assessment is suitable for all management levels including executive board and admin-
istrative council, but likewise for young people with the aim of a comprehensive potential analysis.
Typically, the prognostic assessment is accomplished as an individual one-day assessment. The objectives are defined individually.
Formative evaluation helps a teacher to ascertain the pupil progress from time to time. At the
end of a topic, unit, segment or chapter, the teacher can evaluate the learning outcomes, based on which he can modify his methods, techniques and devices of teaching to provide better learning
experiences. Formative evaluation also provides feedback to pupils. The pupil knows his learning
progress from time to time. Thus, formative evaluation motivates the pupils for better learning. As
such, it helps the teacher to take appropriate remedial measures.
It is concerned with the process of development of learning. In this sense, evaluation is concerned
not only with the appraisal of the achievement but also with its improvement. Formative evaluation
is generally concerned with the internal agent of evaluation, like participation of the learner in the
learning process.
2. Placement : Placement is concerned with finding out the position of an individual in the curriculum from which he has to start learning.
3. Monitoring : Monitoring is concerned with keeping track of the day-to-day progress of the learners and pointing out changes necessary in the methods of teaching, instructional strategies,
etc.
7. Its results cannot be used for grading or placement purposes.
Examples
1. Monthly tests.
2. Class tests.
3. Periodical assessment.
The approaches of summative evaluation imply some sort of final comparison of one item or criterion against another. It carries the danger of negative effects: this evaluation may brand a student as a failed candidate, causing frustration and a setback in the candidate's learning process. The traditional examinations are generally
summative evaluation tools. Tests for formative evaluation are given at regular and frequent intervals
during a course; whereas tests for summative evaluation are given at the end of a course or at the
end of a fairly long period (say, a semester).
2. Certifying : Certifying is concerned with giving evidence that the learner is able to perform
a job according to the previously determined standards.
3. Promoting : It is concerned with promoting pupils to next higher class.
4. Selecting : Selecting the pupils for different courses after completion of a particular course
structure.
Examples
1. Traditional school and university examination
2. Teacher-made tests
3. Standardised tests
4. Practical and oral tests
5. Rating scales
1.4.5 Placement
Placement evaluation is designed to place the right person in the right place. It ascertains the entry performance of the pupil. The future success of the instructional process depends on the success
of placement evaluation. Placement evaluation aims at evaluating the pupil’s entry behaviour in a
sequence of instruction. In other words, the main goal of such evaluation is to determine the level
or position of the child in the instructional sequence. We have a planned scheme of instruction for
classroom which is supposed to bring a change in pupil’s behaviour in an orderly manner. Then we
prepare or place the students for planned instruction for their better prospects.
Examples
1. Aptitude test
2. Self-reporting inventories
3. Observational techniques
4. Medical entrance exam.
5. Engineering or Agriculture entrance exam.
The measurement is made in terms of a class or any other norm group. Almost all our class-
room tests, public examinations and standardised tests are norm-referenced as they are interpreted
in terms of a particular class and judgements are formed with reference to the class.
Examples
1. Raman stood first in Mathematics test in his class.
2. The typist who types 60 words per minute stands above 90 percent of the typists who appeared for the interview.
3. Amit surpasses 65% of students of his class in reading test.
In the above examples, the person’s performance is compared to others of their group and the
relative standing position of the person in his/her group is mentioned. We compare an individual’s
performance with similar information about the performance of others.
That is why selection decisions always depend on norm-referenced judgements. A major requirement
of norm-referenced judgements is that individuals being measured and individuals forming the group
or norm, are alike. In norm-referenced tests very easy and very difficult items are discarded and
items of medium difficulty are preferred because our aim is to study relative achievement.
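The percentile-rank arithmetic behind such norm-referenced statements (as in the typist example above) can be sketched in Python; the norm-group values below are hypothetical:

```python
def percentile_rank(scores, value):
    """Percentage of scores in the norm group that fall below the given value."""
    below = sum(1 for s in scores if s < value)
    return 100 * below / len(scores)

# Hypothetical norm group: typing speeds (words per minute) of 10 typists
speeds = [35, 40, 42, 45, 48, 50, 52, 55, 58, 62]
print(percentile_rank(speeds, 60))  # 90.0 — a typist at 60 wpm surpasses 90% of the group
```

The judgement is always relative: the same score of 60 would earn a different percentile rank against a different norm group.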
Examples
1. Raman got 93 marks in a test of Mathematics.
In the above examples there is no reference to the performance of other members of the group.
Thus criterion-referenced evaluation determines an individual’s status with reference to well defined
criterion behaviour.
It is an attempt to interpret test results in terms of clearly defined learning outcomes which serve
as referents (criteria). Success of criterion-reference test lies in the delineation of all defined levels of
achievement which are usually specified in terms of behaviourally stated instructional objectives.
Hively and Millman (1974) suggested a new term, ‘domain-referenced test’ and to them the word
‘domain’ has a wider connotation. A criterion-referenced test can measure one or more assessment domains.
The standardised test is developed with the help of professional writers, reviewers and editors of test items, whereas the teacher-made test usually relies upon the skill of one or two teachers. The standardised test provides norms for various groups that are broadly representative of performance throughout the country, whereas the teacher-made test lacks this external point of reference.
composes a response to a prompt.
In most cases, the prompt consists of printed materials (a brief question, a collection of histori-
cal documents, graphic or tabular material, or a combination of these). However, it may also be an
object, an event, or an experience. Student responses are usually produced ‘on demand’, i.e., the respondent does the writing at a specified time and within a fixed amount of time. These constraints
contribute to standardization of testing conditions, which increases the comparability of results across
students or groups.
1.7.3 Self-Assessment
Once learners are able to use the assessment criteria appropriately and can actively contribute to
peer-assessment activities, the next step is to engage them in self-assessment tasks. Self-assessment
is a very powerful teaching tool and crucial to the Assessment for Learning process.
Once learners can engage in peer assessment activities, they will be able to apply these new
skills to undertaking ‘objective’ assessment of their own work. We all know it is easy to find fault in
other people’s work, but it is a far more challenging process to judge one’s own work. Once learners
can assess their own work and their current knowledge base, they will be able to identify the gap in
their own learning; this will aid learning and promote progress and contribute to the self-management
of learning.
Teachers need to :
2. ensure they provide individuals with the necessary support so that they are able to acknowledge
shortcomings in their own work
3. support learners through the self-assessment process so that strengths in their work are fully
recognized and weaknesses are not exaggerated to the point that they damage learners’ self-
esteem.
When learners are able to understand the assessment criteria, progress is often maximized, especially when individuals have opportunities to apply the assessment criteria to work produced by
their peers as part of planned classroom activities. Peer assessment using the predefined assessment
criteria is the next stage in evaluating learner understanding and consolidating learning.
1. learners clarifying their own ideas and understanding of the learning intention
The assessment may be of the final product or understanding, or of the process of developing that product or understanding. Whilst the benefits of group work are well documented, allocating marks and feedback to individuals within a group remains a challenge.
2. Observations, which may either involve counting the number of times that a particular phe-
nomenon occurs, such as how often a particular word is used in interviews, or coding observa-
tional data to translate it into numbers
Although qualitative data is much more general than quantitative, there are still a number of common
techniques for gathering it. These include :
3. ‘Postcards’, or small-scale written questionnaires that ask, for example, three or four focused
questions of participants but allow them space to write in their own words;
4. Secondary data, including diaries, written accounts of past events, and company reports;
5. Observations, which may be on site or under ‘laboratory conditions’, for example, where participants are asked to role-play a situation to show what they might do.
The term ‘continuous’ is meant to emphasise that evaluation of identified aspects of students’ ‘growth
and development’ is a continuous process rather than an event, built into the total teaching-learning
process and spread over the entire span of the academic session. It means regularity of assessment, frequency of unit testing, diagnosis of learning gaps, use of corrective measures, retesting, and self-evaluation.
The second term ‘comprehensive’ means that the scheme attempts to cover both the scholastic
and the co-scholastic aspects of students’ growth and development. Scholastic aspects include curricular areas or subject-specific areas, whereas co-scholastic aspects include life skills, co-curricular activities, attitudes, and values.
The scheme is thus a curricular initiative, attempting to shift emphasis from testing to holistic
learning. It aims at creating good citizens possessing sound health, appropriate skills and desir-
able qualities besides academic excellence. It is hoped that this will equip the learners to meet the
challenges of life with confidence and success.
• Assessment embedded in the teaching and learning process within the broader educational
philosophy of ‘assessment for learning’.
3. Child-centered and activity based pedagogy
4. Focus on (learning-outcome based) competency development rather than content memorisation
5. Broadening the scope of assessment by way of including self-assessment, peer-assessment besides
teacher assessment
6. Non-threatening, stress free and enhanced participation/ interaction
7. Focus on assessment for/as/of learning rather than evaluation of achievement
8. Reposing faith in teachers and the system
9. Enhancing self confidence in children
This system of assessment involves awarding grades to students to reflect the level of performance (or
standard) they have achieved relative to the pre-defined standards. Students’ grades, therefore, are
not determined in relation to the performance of others, or to a pre-determined distribution of grades.
Standards-based assessment lets students know against which criteria you will judge their work,
and the standards attached to each of these criteria. It tells students what performance is required
and allows you to make comparisons between students based on their achievement of the standards.
Standards should be clear, straightforward, observable, measurable, and well-articulated. The standards guide us in creating experiences that enable our students to know how, when and why to say what to whom.
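Because standards-based grades depend on fixed cut-offs rather than on how classmates perform, the grading logic can be sketched as a simple lookup; the grade labels and cut-off marks below are hypothetical:

```python
def standards_based_grade(score, cutoffs):
    """Award a grade against pre-defined standards (cut-offs),
    independent of how other students perform."""
    for grade, minimum in cutoffs:
        if score >= minimum:
            return grade
    return "Not yet meeting the standard"

# Hypothetical cut-offs: the standards, not the class distribution, fix the grade
cutoffs = [("A", 85), ("B", 70), ("C", 55)]
print(standards_based_grade(78, cutoffs))  # B
```

Contrast this with norm-referenced grading, where the same score of 78 could earn any grade depending on the rest of the class.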
Types of standards
1. Content standards : statements about what learners should know and be able to do.
2. Performance standards : show how learners achieve the targeted standards. They refer to how learners are meeting a standard and show the learner’s progress towards meeting it.
3. Proficiency standards : tell us how well learners should perform.
1.10.3 Assessment of learning
Assessment of learning informs students, teachers and parents, as well as the broader educational community, of achievement at a certain point in time, in order to celebrate success, plan interventions and support continued progress.
Assessment must be planned with its purpose in mind. Assessment for, as and of learning all have a
role to play in supporting and improving student learning, and must be appropriately balanced. The
most important part of assessment is the interpretation and use of the information that is gleaned
for its intended purpose. Assessment is embedded in the learning process.
It is tightly interconnected with curriculum and instruction. As teachers and students work towards
the achievement of curriculum outcomes, assessment plays a constant role in informing instruction,
guiding the student’s next steps, and checking progress and achievement. Teachers use many dif-
ferent processes and strategies for classroom assessment, and adapt them to suit the assessment
purpose and needs of individual students. Research and experience show that student learning is
best supported when
3. Students are involved in the learning process (they understand the learning goal and the cri-
teria for quality work, receive and use descriptive feedback, and take steps to adjust their
performance)
5. Parents are well informed about their child’s learning, and work with the school to help plan
and provide support
6. Students, families, and the general public have confidence in the system
1. Formative assessment is developmental rather than judgmental in nature, whereas summative assessment judges the merit of instructional sequences.
2. Formative assessment is made during the instructional phase, about progress in learning, whereas summative assessment is the terminal assessment of performance at the end of instruction.
3. Formative assessment reports individual scores in a pass-fail pattern, whereas in summative assessment the report is given in terms of total scores.
4. In formative assessment the content focus is detailed and narrow, whereas in summative assessment the content is general and broad.
1.10.5 Authentic Assessment
Authentic assessment (AA) springs from the following reasoning and practice :
3. Therefore, schools must help students become proficient at performing the tasks they will
encounter when they graduate.
4. To determine if it is successful, the school must then ask students to perform meaningful tasks
that replicate real world challenges to see if students are capable of doing so.
Thus, in AA, assessment drives the curriculum. That is, teachers first determine the tasks that stu-
dents will perform to demonstrate their mastery, and then a curriculum is developed that will enable
students to perform those tasks well, which would include the acquisition of essential knowledge and
skills. This has been referred to as planning backwards.
If I were a golf instructor and I taught the skills required to perform well, I would not assess my
students’ performance by giving them a multiple choice test. I would put them out on the golf
course and ask them to perform. Although this is obvious with athletic skills, it is also true for
academic subjects. We can teach students how to do math, do history and do science, not just know
them. Then, to assess what our students had learned, we can ask students to perform tasks that
”replicate the challenges” faced by those using mathematics, doing history or conducting scientific
investigation.
Traditional              Authentic
Selecting a Response     Performing a Task
Contrived                Real-life
Recall/Recognition       Construction/Application
Teacher-structured       Student-structured
Indirect Evidence        Direct Evidence
In 2000, NCERT’s NAS programme was incorporated into the SSA programme. The plan was
to carry out three NAS cycles, each cycle covering three key grades :
1. Class III
2. Class V
3. Class VII/VIII
In Class V, students are also tested in environmental studies (EVS), while in Class VII/VIII students also take tests in science and social science. The Baseline Achievement Survey (BAS) was carried out in 2001–2004, followed by the Midterm Achievement Survey (MAS) in 2005–2008. The experience
gained through these initial two cycles made the value of the NAS clear, and the surveys were made
an ongoing feature of the national education system. To mark this shift from stand-alone surveys to
continuous assessment, the Terminal Achievement Survey (TAS) has been renamed ‘Cycle 3’.
As a means of measuring progress in education, the NAS is a useful tool for teachers and policymakers alike – to establish what students are achieving in core subjects and to identify any areas of concern. By
repeating the NAS at regular intervals, the data can be used to measure trends in education achieve-
ment levels and measure progress made by SSA and other education reforms.
Instant and detailed feedback, as well as flexibility of location and time, are just two of the many benefits associated with online assessments. There are many resources available that provide online assessments, some free of charge and others that charge fees or require a membership. The online examination system not only reflects the fairness and objectivity of examination, but also reduces the workload of teachers, which is why it is being accepted by more and more schools, certification organisations and training organisations. Most online examination systems support only a few fixed question types and do not allow users to define their own question types, so they have poor scalability.
This paper proposes a new online examination system which not only provides several basic question types but also allows users to define new question types (user-defined question types) by composing basic question types and/or other user-defined question types, realised on the basis of object-oriented concepts and the composite design pattern. The new online examination system overcomes the shortcomings of older online examination systems and has better extensibility and flexibility.
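The composite design pattern mentioned above can be sketched as follows; this is a minimal illustration of the idea, and the class and method names are assumptions, not taken from the system described:

```python
from abc import ABC, abstractmethod

class Question(ABC):
    """Component: common interface for basic and user-defined question types."""
    @abstractmethod
    def max_marks(self):
        ...

class MultipleChoice(Question):
    """Leaf: a basic question type."""
    def __init__(self, marks):
        self.marks = marks
    def max_marks(self):
        return self.marks

class CompositeQuestion(Question):
    """Composite: a user-defined type built from basic and/or composite types."""
    def __init__(self):
        self.parts = []
    def add(self, question):
        self.parts.append(question)
        return self
    def max_marks(self):
        return sum(p.max_marks() for p in self.parts)

# A user-defined 'reading comprehension' type composed of three MCQs
comprehension = CompositeQuestion()
comprehension.add(MultipleChoice(2)).add(MultipleChoice(2)).add(MultipleChoice(1))
print(comprehension.max_marks())  # 5
```

Because a composite is itself a Question, user-defined types can in turn be nested inside other user-defined types, which is what gives the scheme its extensibility.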
1. Independent Work : Independent work is work that a student prepares to assist the instruc-
tor in determining their learning progress. Some examples are: exercises, papers, portfolios,
and exams (multiple choice, true/false, short answer, fill in the blank, open ended/essay or
matching). To truly evaluate, an instructor must use multiple methods.
2. Group Work : Students are often asked to work in groups. With this brings on new exami-
nation strategies. Students can be evaluated using a collaborative learning model in which the
learning is driven by the students and/or a cooperative learning model where tasks are assigned
and the instructor is involved in decisions.
1.11.2 On demand assessment/ evaluation
The scheme of on-demand examination is a comprehensive ICT enabled system of examination
which provides the learners an opportunity to appear in the examination as per their preparation
and convenience. In fact, it is a blended scheme of ICT and traditional examination system wherein
students can walk in any time at the selected examination centres and take the examination. The demand for flexibility in the education system and the changing profile of learners have necessitated starting such an innovative scheme, which has made the existing examination system more flexible and learner-friendly.
This has been very successful in the distance education system, as most distance learners in higher
education are working people; they normally do not get leave from their organizations for several
days at a stretch for term end examination, and hence they fail to complete their courses in stipulated
time limit. Most on-demand examination is conducted through ICT. Its objective is to enable the
learners to appear in the examination as per their preparation and convenience on the date and time
of their choice.
8. It reduces workload of students, teachers and also the entire system of examination.
On-demand examination makes use of ICT to solve problems which arise due to human limitations.
1. It makes possible instant generation of parallel question papers, and facilitates authorised data
entry at different points, leaving no chance for human error.
2. It has very silently reformed the system of evaluation without making abrupt changes.
3. It is not only simple and user friendly but it is also cost effective and saves time and effort in
setting question papers.
4. It generates individualised and unique question papers on the day of examination by picking
up the questions randomly from the question bank as per the blueprint & design.
5. It removes the frustration, loss of self-esteem, and depression that generally characterise the term-end examination.
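The random generation of parallel question papers from a question bank, as in point 4 above, can be sketched like this; the blueprint format and the question bank below are illustrative assumptions:

```python
import random

def generate_paper(question_bank, blueprint, seed=None):
    """Pick questions at random from the bank according to a blueprint
    mapping each topic to the number of questions required."""
    rng = random.Random(seed)
    paper = []
    for topic, count in blueprint.items():
        pool = [q for q in question_bank if q["topic"] == topic]
        paper.extend(rng.sample(pool, count))  # sample without replacement
    return paper

# Hypothetical bank: 10 algebra and 10 geometry questions
bank = ([{"topic": "algebra", "text": f"A{i}"} for i in range(10)]
        + [{"topic": "geometry", "text": f"G{i}"} for i in range(10)])
blueprint = {"algebra": 3, "geometry": 2}
paper = generate_paper(bank, blueprint, seed=1)
print(len(paper))  # 5
```

Each candidate (a different seed) receives a different but structurally parallel paper, since every paper satisfies the same blueprint.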
1.11.3 Focus on Assessment and Evaluation in various Educational Com-
missions and NCFs
Examinations are an indispensable part of the educational process as some form of assessment is
necessary to determine the effectiveness of teaching learning processes and their internalization by
learners. Various Commissions and Committees have felt the need for examination reforms. The
Hunter Commission (1882), Calcutta University Commission or Sadler Commission (1917-1919),
Hartog Committee Report (1929), the Report of Central Advisory Board / Sargeant Plan (1944),
Secondary Education Commission / Mudaliar Commission (1952-53) have all made recommendations
regarding reducing emphasis on external examination and encouraging internal assessment through
Continuous and Comprehensive Evaluation.
This aspect was strongly taken care of in the National Policy on Education, 1986, which recommends “Continuous and Comprehensive Evaluation that incorporates both scholastic and non-scholastic aspects of evaluation, spread over the total span of instructional time”.
2 Developing Assessment Tools, Techniques & Strategies – I

Contents
2.1 Domains of Learning
2.1.1 Cognitive Knowledge
2.1.2 Affective Attitude
2.1.3 Psycho-motor Skills
2.1.4 Relationship between Educational objectives, Learning experiences and Evaluation
2.2 Revised taxonomy of objectives [2001] and its implications for assessment and stating the objectives
2.2.1 Knowledge dimensions
2.2.2 Cognitive domain (knowledge-based)
2.2.3 Stating objectives as learning outcomes - General & Specific
2.2.4 Construction of Achievement tests – steps, procedure and uses
2.2.5 Types of test items
2.2.6 Construction of Diagnostic test – steps, uses and limitation
2.3 Remedial Measures
2.3.1 Need
2.3.2 Types
2.3.3 Strategies
2.4 Quality assurance in tools
2.4.1 Reliability
2.4.2 Validity
When these learning domain ideas are applied to learning environments, active verbs are used to
describe the kind of knowledge and intellectual engagement we want our students to demonstrate.
2.1.1 Cognitive Knowledge
The cognitive domain comprises six areas of intellectual skill that build sequentially from simple to complex behaviours. Bloom arranged them this way :
In time, this arrangement evolved into what we now call Bloom’s Revised Taxonomy. Category
names were changed from nouns to verbs, but are still ordered from simple to complex :
1. Remembering
2. Understanding
3. Applying
4. Analysing
5. Evaluating
6. Creating
The learning part covers the methodology and tools students use to learn the topic. They may be asked to read a book or watch a video and come prepared before the first class on the topic (flipped classroom). They may be asked to prepare for a seminar: one may focus on theory, another on experimental techniques, and another on applications. They may be asked to prepare for a group discussion or to prepare a study report.
The experience part is the most crucial part of the entire educational mechanism. How do you plan to give students experiential learning? You may ask them to read a book and write the gist, watch a video and write a report, develop a model, play around with an interactive animation and write their observations, or take them to an industry or research lab and ask them to write about their experiences.
Without experience there won't be any learning; students will forget things very easily. Somehow we need active (not passive) participation of students in the learning process. Preparing students just to remember things is only the first stage of learning, and there are five more stages without which learning will not be complete. That is what is described in the revised version of Bloom's taxonomy.
The evaluation part comes when you want to rate the learning level of students. Are you trying to test students at the remembering level, or at understanding, applying, analysing, evaluating or creating?
The understanding part tests why something is presented in a particular manner in the textbook. The application part tests whether students can apply the equations derived or concepts developed directly to problems and get results.
The analysis part cross-checks the validity of the equations or concepts developed: what if this were not there, what if it worked the other way around, and so on. So the understanding part deals with where a particular equation or concept works, and the analysis part deals with where it fails.
Then comes the evaluation part, where the student is given a situation and must apply the concepts learned to predict the outcome. Students should be able to give reasons why something works or fails in the given situation; they have to pass through both the understanding and analysis stages to judge a case, a system or a situation.
Then comes the creation part, where the student is asked to create something new that answers the problems identified at the analysis level. It is not just doing something new. The results may then be presented as a study report or a project report.
2.2.2 Cognitive domain (knowledge-based)
In the 1956 original version of the taxonomy, the cognitive domain is broken into the six levels of objectives listed below. In the 2001 revised edition of Bloom's taxonomy, the levels have slightly different names and the order is revised: Remember, Understand, Apply, Analyze, Evaluate, and Create (rather than Synthesize).
Knowledge
Knowledge involves recognizing or remembering facts, terms, basic concepts, or answers without
necessarily understanding what they mean. Its characteristics may include:
• Knowledge of ways and means of dealing with specifics—conventions, trends and sequences, classifications and categories.
Comprehension
Comprehension involves demonstrating an understanding of facts and ideas by organizing, summarizing, translating, generalizing, giving descriptions, and stating the main ideas.
Application
Application involves using acquired knowledge, facts, techniques and rules to solve problems in new situations. Learners should be able to use prior knowledge to solve problems, identify connections and relationships, and see how they apply in new situations.
Analysis
Analysis involves examining and breaking information into component parts, determining how the
parts relate to one another, identifying motives or causes, making inferences, and finding evidence
to support generalizations. Its characteristics include
• Analysis of elements
• Analysis of relationships
• Analysis of organization
Synthesis
Synthesis involves building a structure or pattern from diverse elements; it also refers to the act of
putting parts together to form a whole. Its characteristics include:
Evaluation
Evaluation involves presenting and defending opinions by making judgments about information, the validity of ideas, or the quality of work based on a set of criteria. Its characteristics include:
Receiving
The lowest level; the student passively pays attention. Without this level, no learning can occur.
Receiving is about the student’s memory and recognition as well.
Responding
The student actively participates in the learning process, not only attends to a stimulus; the student
also reacts in some way.
Valuing
The student attaches a value to an object, phenomenon, or piece of information. The student
associates a value or some values to the knowledge they acquired.
Organizing
The student can put together different values, information, and ideas, and can accommodate them
within his/her own schema; the student is comparing, relating and elaborating on what has been
learned.
Characterizing
The student at this level tries to build abstract knowledge.
Perception
The ability to use sensory cues to guide motor activity: This ranges from sensory stimulation,through
cue selection, to translation.
Key words:
chooses, describes, detects, differentiates, distinguishes, identifies, isolates, relates, selects.
Set
Readiness to act: it includes mental, physical, and emotional sets. These three sets are dispositions that predetermine a person's response to different situations (sometimes called mindsets). This subdivision of the psychomotor domain is closely related to the "responding to phenomena" subdivision of the affective domain.
Keywords:
begins, displays, explains, moves, proceeds, reacts, shows, states, volunteers.
Guided response
The early stages of learning a complex skill that includes imitation and trial and error: Adequacy of
performance is achieved by practicing.
Keywords:
copies, traces, follows, reacts, reproduces, responds.
Mechanism
The intermediate stage in learning a complex skill: Learned responses have become habitual and the
movements can be performed with some confidence and proficiency.
Key words:
assembles, calibrates, constructs, dismantles, displays, fastens, fixes, grinds, heats, manipulates, measures, mends, mixes, organizes, sketches.
Complex Overt Response
Performing with advanced skill: the skilful performance of motor acts that involve complex movement patterns.
Key words:
assembles, builds, calibrates, constructs, dismantles, displays, fastens, fixes, grinds, heats, manipulates, measures, mends, mixes, organizes, sketches. (Note: the key words are the same as in mechanism, but will have adverbs or adjectives that indicate that the performance is quicker, better, more accurate, etc.)
Adaptation
Skills are well developed and the individual can modify movement patterns to fit special requirements.
Key words:
adapts, alters, changes, rearranges, reorganizes, revises, varies.
Origination
Creating new movement patterns to fit a particular situation or specific problem: Learning outcomes
emphasize creativity based upon highly developed skills.
Key words:
arranges, builds, combines, composes, constructs, creates, designs, initiates, makes, originates.
Specific objectives:
There are usually several, since each segment of an organization or each chapter of an investigation has its own goal to be achieved, which is subsumed in the general objective.
Thus, the sum of all the specific objectives should meet the general objective, since they comprise the steps that must be taken first (often in succession or in an organized way) to reach the top of the ladder.
• Planning of test
• Preparation of the blue print
• Writing of items
1. Planning of test:
3. Weightage to objectives: This indicates what objectives are to be tested and what weightage has to be given to each objective.
4. Weightage to content: This indicates the various aspects of the content to be tested and
the weightage to be given to these different aspects.
5. Weightage to form of questions: This indicates the form of the questions to be included
in the test and the weightage to be given for each form of questions.
6. Weightage to difficulty level: This indicates the total marks and the weightage to be given to questions of different difficulty levels.
7. Preparation of the blue print: Blue print is a three-dimensional chart giving the placement
of the objectives, content and form of questions.
8. Writing of items:
• It should also check whether all the questions included can be answered within the time
allotted.
• In the case of short answer and essay type questions, the marking scheme is prepared.
• In preparing marking scheme, the examiner has to list out the value points to be credited
and fix up the mark to be given to each value point.
Marking Scheme:

Q.No | Value points  | Marks | Total Marks
-----|---------------|-------|------------
1    | Value point-1 | 1/2   | 2
     | Value point-2 | 1/2   |
     | Value point-3 | 1/2   |
     | Value point-4 | 1/2   |
2    | Value point-1 | 1/2   | 2
     | Value point-2 | 1/2   |
     | Value point-3 | 1/2   |
     | Value point-4 | 1/2   |
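The arithmetic of a marking scheme can be checked mechanically: each question's total is simply the sum of its value points. A minimal sketch in Python, using the illustrative value points and marks from the table above (the variable names are ours, not from the text):

```python
from fractions import Fraction

# Marking scheme from the table: each question carries four value
# points worth 1/2 mark each, so each question totals 2 marks.
marking_scheme = {
    1: [Fraction(1, 2)] * 4,   # Value point-1 ... Value point-4
    2: [Fraction(1, 2)] * 4,
}

for q_no, value_points in marking_scheme.items():
    total = sum(value_points)
    print(f"Q.No {q_no}: total marks = {total}")   # 2 for each question

grand_total = sum(sum(vps) for vps in marking_scheme.values())
print("Test total:", grand_total)   # 4
```

Using `Fraction` keeps the half-mark arithmetic exact, which matters when many 1/2-mark value points are tallied.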
Question-wise Analysis

Q.No | Content     | Objectives    | Forms of Questions | Difficulty Level | Marks | Estimated Time (mins)
-----|-------------|---------------|--------------------|------------------|-------|----------------------
1    | Sub-topic-1 | Knowledge     | Objective Type     | Easy             | 1/2   | 1
2    | Sub-topic-2 | Understanding | Objective Type     | Average          | 1/2   | 1
3    | Sub-topic-2 | Application   | Objective Type     | Easy             | 1/2   | 1
4    | Sub-topic-1 | Knowledge     | Objective Type     | Easy             | 1/2   | 1
5    | Sub-topic-2 | Understanding | Objective Type     | Average          | 1/2   | 1
6    | Sub-topic-1 | Analysis      | Objective Type     | Average          | 1/2   | 1
7    | Sub-topic-1 | Synthesis     | Short Answer       | Difficult        | 2     | 3
8    | Sub-topic-2 | Application   | Short Answer       | Easy             | 2     | 3
9    | Sub-topic-1 | Analysis      | Essay              | Average          | 4     | 10
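A question-wise analysis is what feeds the blueprint: summing marks by content, objective and difficulty level yields the weightages described in the planning steps. A minimal sketch, using the illustrative entries from the table above (function and variable names are ours, not from the text):

```python
from collections import defaultdict
from fractions import Fraction

half = Fraction(1, 2)
# One tuple per question: (content, objective, form, difficulty, marks)
questions = [
    ("Sub-topic-1", "Knowledge",     "Objective Type", "Easy",      half),
    ("Sub-topic-2", "Understanding", "Objective Type", "Average",   half),
    ("Sub-topic-2", "Application",   "Objective Type", "Easy",      half),
    ("Sub-topic-1", "Knowledge",     "Objective Type", "Easy",      half),
    ("Sub-topic-2", "Understanding", "Objective Type", "Average",   half),
    ("Sub-topic-1", "Analysis",      "Objective Type", "Average",   half),
    ("Sub-topic-1", "Synthesis",     "Short Answer",   "Difficult", Fraction(2)),
    ("Sub-topic-2", "Application",   "Short Answer",   "Easy",      Fraction(2)),
    ("Sub-topic-1", "Analysis",      "Essay",          "Average",   Fraction(4)),
]

def weightage(field):
    """Total marks per category along one blueprint dimension."""
    totals = defaultdict(Fraction)
    for q in questions:
        totals[q[field]] += q[4]
    return dict(totals)

print("By content:   ", weightage(0))   # Sub-topic-1: 7 1/2, Sub-topic-2: 3 1/2
print("By objective: ", weightage(1))
print("By difficulty:", weightage(3))
print("Total marks:  ", sum(q[4] for q in questions))   # 11
```

Printing the three groupings side by side is, in effect, the blueprint: the same total of 11 marks distributed across objectives, content and difficulty.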
1. Supply type (Recall Type) The respondent has to supply the responses.
2. Selection type (Recognition Type) The respondent has to select the responses from among the
given responses.
• A large amount of study material can be tested in a very short period of time.
• Economy of time.
• Objectivity of scoring.
• No bluffing
• It reduces the subjective element of the examiner to the minimum.
• If carefully planned, it can measure the higher mental process of understanding, applica-
tion, analysis, prediction and interpretation.
Essay type:
• It is a free-response test item.
• Easy to prepare.
• Useful in measuring certain abilities and skills.
• Permits the examinee to write down comprehensively what he knows about something.
• Promotes originality and creative thinking.
• Possibility of guess work is eliminated.
• Reduces the chance of on-the-spot copying.
• Low printing cost.
• Minimal validity.
• Lack of reliability.
• No objectivity.
• Rote memory is encouraged.
• It is a time-consuming test item.
2. It is a means by which an individual profile is examined and compared against certain norms
or criteria.
3. It focuses on an individual's educational weaknesses or learning deficiencies and identifies the gaps in pupils' learning.
6. It is corrective in nature.
7. It pinpoints the specific types of error each pupil is making and searches for underlying causes
of the problem.
9. It helps us to identify trouble spots and discover those areas of students' weakness that are unresolved by formative tests.
(a) Observation.
(b) Analysis of oral responses.
(c) Written class work.
(d) Analysis of student’s assignments and test performance.
(e) Analysis of cumulative and anecdotal records.
3. Determining the Factors/Reasons Causing the Learning Difficulty (Data Collection):
(a) Retardation in basic skills.
(b) Scholastic aptitude factors.
(c) Physical, mental and emotional (personal) factors.
(d) Indifferent attitude and environment.
(e) Improper teaching methods, unsuitable curriculum, complex course materials.
4. Remedial measures/treatment to rectify the difficulties:
(a) Providing face to face interaction.
(b) Providing as many simple examples as possible.
(c) Giving concrete experiences, use of teaching aids.
(d) Promoting active involvement of the students.
(e) Consultation of Doctors/Psychologists/Counsellors.
(f) Developing strong motivation.
5. Prevention of Recurrence of the Difficulties:
(a) Planning for non-recurrence of the errors in the process of learning.
The unit on which a diagnostic test is based should be broken into learning points without omitting any item, and various types of test items are to be prepared in a proper sequence:
1. Analysing the content minutely, i.e., major and minor concepts.
2. Forming questions on each minor concept (recall and recognition type) in order of difficulty.
3. Review the test items by the experts/experienced teacher to modify or delete test items if
necessary.
4. Administering the test.
5. Scoring the test and analysis of the results.
6. Identification of weakness
7. Identify the causes of weakness (such as defective hearing or vision, poor home conditions,
unsatisfactory relations with classmates or teacher, lack of ability) by the help of interview,
questionnaires, peer information, family, class teacher, doctor or past records.
8. Suggest remedial programme (No set pattern).
Motivation, re-teaching, token economy, giving reinforcement, correcting emotions, changing section, giving living examples, moral preaching.
Elements of Diagnostic Tests:
3. Time Scheduling.
4. Sequencing of Study.
7. Costs.
A student who, for example, has a low reading level might be given remediation on a one-on-one basis, phonics instruction, and practice reading text aloud.
2.3.1 Need
It aims to cater for individual differences, to help students who lag behind, and to develop interpretation and critical thinking skills (for example, in the learning of map work).
2.3.2 Types
Small Group Tutoring
Remedial courses often send 'remedial students' off into small groups to support those who are falling behind. Often, schools bring in specialists who peel off students into small groups to focus on specific interventions.
Similarly, a common teaching strategy is to allow higher achieving students to work in groups alone.
This gives time for the teacher to spend focused time with a small group of students who need
additional support.
One-To-One Tutoring
One-to-one tutoring has either a trained specialist, the classroom teacher, or a volunteer spend
individual time with a student. While it is an effective way of supporting students, it is resource
intensive. It is often hard to find enough time and staff to have one-to-one interventions while also
supporting the rest of the class. Some parents opt for paid private one-to-one tutoring to address
this shortfall.
Private Tutoring
Private tutoring is one of the most popular formats for remedial support. Parents who have the funds may use after-school tutoring as an option to help ensure their children keep up with their peers.
Specialist Tutoring
Trained specialists, such as in the reading recovery program, can provide research-based systematic
programs of support to help students reach benchmarks. Often, schools employ trained specialists
to come into classrooms and take one-to-one or small-group sessions with students in need.
Peer Tutoring
Peer tutoring involves one student helping another student on their work. This may take the form
of older students coming into the classroom to help younger students. Or, it may be getting more
advanced students in the same class to pair up with less advanced students to help them learn.
Volunteer Tutoring
Schools often rely on volunteer tutors to help provide additional support to remedial students. This
may take the form of ‘parent helpers’ who come into the classroom to help the teacher and get to
know the class better. A challenge of volunteer tutoring is providing sufficient training and support
for the volunteers so they can effectively help students.
Withdrawal System
A withdrawal system involves removing students entirely from a mainstream classroom for a short
(one lesson) or long (indefinitely) time to give tailored support.
The challenge of withdrawal systems is that it might stigmatize students and exclude them from
participation in mainstream activities. Exclusion based on special needs is highly discouraged by
contemporary education scholars.
• Pause and rewind possibilities
• Accessibility for rural and remote students
However, there are some challenges of CAIs such as:
• Potential lack of synchronous teacher-student interaction
• Cost of using technologies and the internet
2.3.3 Strategies
1. Teachers should modify the curriculum to suit students’ learning styles and abilities.
2. To gain expertise, the teacher should set some simple teaching objectives.
3. Textbooks should not be used to guide teaching and should not be considered the school
curriculum.
4. Teachers should be encouraged to follow cross-curricular teaching guidelines by flexibly con-
necting similar teaching areas so that more time can be spent on effective practices and learning.
5. Teachers should be able to create a variety of materials using information from the internet, newspapers, magazines, and the Education Department's references.
6. Before moving on to abstract ideas, teachers should include concrete and useful examples and
continue at a speed that is appropriate for the student’s learning abilities.
7. Teachers should use more teaching aids, games, and events to promote active participation
from students. They can also use information technology and all available teaching tools to
assist students.
2.4.1 Reliability
It is not always possible to obtain perfectly consistent results, because several factors such as physical health, memory, guessing, fatigue and forgetting may affect the results from one measurement to another. These extraneous variables may introduce some error into our test scores; this error is called measurement error. So while determining the reliability of a test we must take into consideration the amount of error present in the measurement.
Methods of Determining Reliability:
For most educational tests the reliability coefficient provides the most revealing statistical index of
quality that is ordinarily available. Estimates of the reliability of test provide essential information
for judging their technical quality and motivating efforts to improve them. The consistency of a test
score is expressed either in terms of shifts of an individual’s relative position in the group or in terms
of amount of variation in an individual’s score.
1. Relative Reliability or Reliability Coefficient:
In this method the reliability is stated in terms of a coefficient of correlation known as the reliability coefficient. Hence we determine the shifting of the relative position of an individual's score by the coefficient of correlation.
2. Absolute Reliability or Standard error of Measurement:
3. Split-Half Method:
There are also methods by which reliability can be determined from a single administration of a single test. One such method is the split-half method. In this method a test is administered to a group of pupils in the usual manner. The test is then divided into two equivalent halves and the correlation between these half-tests is found. The common procedure for splitting the test is to take all odd-numbered items (1, 3, 5, etc.) in one half and all even-numbered items (2, 4, 6, 8, etc.) in the other half. The scores of the two halves are then correlated, and the reliability of the full test is estimated using the Spearman-Brown formula:
r2 = 2r1 / (1 + r1)    (2.1)

Where,
r2 = reliability coefficient of the full test
r1 = coefficient of correlation between the half-tests.

For example, by correlating the two halves we found a coefficient of .70. By using formula (2.1) we get the reliability coefficient of the full test as:

r2 = (2 × .70) / (1 + .70) = 1.40 / 1.70 = .82    (2.2)

The reliability coefficient is .82 when the coefficient of correlation between the half-tests is .70. It indicates to what extent the sample of test items is a dependable sample of the content being measured (internal consistency).
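The split-half procedure and formula (2.1) can be sketched in a few lines of Python. The score matrix and helper names (`pearson`, `split_half_reliability`) are illustrative, not from the text; the correlation is hand-rolled so the sketch stays self-contained:

```python
# Sketch of the split-half method with the Spearman-Brown correction.
# Item scores are hypothetical: each row is one pupil, 1 = item correct.

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    # Odd-numbered items (1, 3, 5, ...) form one half, even-numbered the other.
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r1 = pearson(odd, even)        # correlation between the half-tests
    return 2 * r1 / (1 + r1)       # formula (2.1): reliability of the full test

# The worked example: r1 = .70 gives r2 = 1.40 / 1.70 = .82
print(round(2 * 0.70 / (1 + 0.70), 2))   # 0.82

# And on a hypothetical score matrix of four pupils x four items:
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(split_half_reliability(scores), 2))
```

Note that the Spearman-Brown step is what distinguishes this from simply reporting the half-test correlation: the half-tests are shorter than the full test, so their correlation underestimates the full test's reliability.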
2.4.2 Validity
Validity is the most important characteristic of an evaluation programme, for unless a test is valid it serves no useful function. Psychologists, educators and guidance counsellors use test results for a variety of purposes. Obviously, no purpose can be fulfilled, even partially, if the tests do not have a sufficiently high degree of validity. Validity means the truthfulness of a test: the extent to which the test measures what the test maker intends it to measure.
It includes two aspects: what is measured and how consistently it is measured. Validity is not a characteristic of the test itself; it refers to the meaning of the test scores and the ways we use the scores to make decisions. The following definitions given by experts will give a clear picture of validity.
Validity of an evaluation device is the degree to which it measures what it is intended to mea-
sure. Validity is always concerned with the specific use of the results and the soundness of our
proposed interpretation.
A test which is reliable is not necessarily also valid. For example, suppose a clock is set forward ten minutes. If the clock is a good timepiece, the time it tells us will be reliable, because it gives a constant result. But it will not be valid as judged by 'standard time'. This illustrates the concept that reliability is a necessary but not a sufficient condition for validity.
valid measure of algebra knowledge.
3. Face validity:
Face validity considers how suitable the content of a test seems to be on the surface. It’s similar
to content validity, but face validity is a more informal and subjective assessment.
Example:
You create a survey to measure the regularity of people’s dietary habits. You review the survey
items, which ask questions about every meal of the day and snacks eaten in between for every
day of the week. On its surface, the survey seems like a good representation of what you want
to test, so you consider it to have high face validity.
As face validity is a subjective measure, it’s often considered the weakest form of validity.
However, it can be useful in the initial stages of developing a method.
5. Usability:
Usability is another important characteristic of measuring instruments, because practical considerations of the evaluation instruments cannot be neglected. The test must have practical value from the time, economy and administration points of view. This may be termed usability. So while constructing or selecting a test the following practical aspects must be taken into account:
1. Ease of Administration:
It means the test should be easy to administer so that the general class-room teachers can use
it. Therefore, simple and clear directions should be given. The test should possess very few
subtests. The timing of the test should not be too difficult.
2. Time required for administration:
An appropriate time limit to take the test should be provided. If, in order to provide ample time, we make the test shorter, the reliability of the test will be reduced. Gronlund and Linn (1995) are of the opinion that "somewhere between 20 and 60 minutes of testing time for each individual score yielded by a published test is probably a fairly good guide".
5. Cost of Testing:
A test should be economical from preparation, administration and scoring point of view.
3 Developing Assessment Tools, Techniques & Strategies – II
Contents
3.1 CCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.2 Need . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.3 Aims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.4 Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.5 Relation with Formative Assessment . . . . . . . . . . . . . . . . . . . . . . . 43
3.1.6 Salient features of Formative Assessment . . . . . . . . . . . . . . . . . . . . . 43
3.1.7 Problems faced by Teachers and Students . . . . . . . . . . . . . . . . . . . . 43
3.2 Meaning & construction of process-oriented tools . . . . . . . . . . . . . . . . . . . . 44
3.2.1 Interview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2.2 Inventories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2.3 Observation schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.4 Check-list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.5 Rating scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.6 Anecdotal record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 Assessment of group processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.1 Nature of group dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.2 Socio-metric techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3.3 Steps for formation of group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3.4 Criteria for assessing tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3.5 Criteria for assessment of social skills in collaborative/ co-operative learning
situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Promoting Self-assessment and Peer assessment . . . . . . . . . . . . . . . . . . . . . 55
3.4.1 Self-assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4.2 Peer assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5 Portfolio assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.1 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.2 Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.3 Developing and assessing Portfolio . . . . . . . . . . . . . . . . . . . . . . . . 58
3.5.4 Developing of Rubric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.1 CCE
Concept, need, importance, relation with FA & problems faced
3.1.1 Concept
Continuous and comprehensive evaluation (CCE) was a procedure of assessment directed by the Right to Education Act, 2009. The assessment was introduced by state governments in India, as well as by the Central Board of Secondary Education, for students of classes six to ten. CCE is a system of school-based assessment and evaluation of students that covers all aspects of students' development.
According to CBSE, "it is a developmental process of assessment which emphasizes two-fold objectives: continuity in evaluation and assessment of broad-based learning and behavioural outcomes." According to this scheme, the term 'continuous' is meant to accentuate that evaluation of identified aspects of students' growth and development is a continuous process rather than an event, built into the total teaching-learning process and spread over the duration of the academic session. The term 'comprehensive' means that the scheme tries to cover both the scholastic and the co-scholastic aspects of students' growth and development.
3.1.2 Need
Help develop cognitive, psychomotor and affective skills
1. Develop students’ thinking processes and memory
2. Make continuous evaluation an integral part of the teaching-learning process
3. Use evaluation data for improving teaching-learning strategies
4. Utilise assessment data as a quality control device to raise academic outcomes
5. Enable teachers to make student-centric decisions about learners’ processes of learning and
learning environments
6. Transform teaching and learning into a student-centric activity.
3.1.3 Aims
1. To assess every aspect of the child during their presence at school.
2. To minimize stress on students.
3. To make assessment regular and comprehensive.
4. To provide a tool for detection and improvement.
5. To equip learners with greater skills.
3.1.4 Importance
1. It helps the learners identify the challenges faced in education.
2. It is aimed at diagnosing the problematic areas in the development of children apart from their
academic results.
3. It increases the punctuality and regularity of the students. They would try to do their assign-
ments to their entire satisfaction.
4. It provides motivation to the students to work thoroughly with consistency without wasting
time.
5. It can serve as a basis to award scholarships and fee concessions.
3.1.5 Relation with Formative Assessment
Formative assessment is an active learning process in which teachers and students continuously and systematically work towards improving students' achievement. Teachers and their students actively engage in the formative assessment process to focus on learning goals and take action to move closer to those goals.
3. Provides the platform for the active involvement of students in their own learning.
5. Recognizes the profound influence assessment has on the motivation and self-esteem of students.
6. Recognizes the need for students to be able to assess themselves and understand how to improve.
8. Incorporates varied learning styles into deciding how and what to teach.
9. Encourages students to understand the criteria that will be used to judge their work.
11. Helps students to support their peers, and expect to be supported by them.
The problem is that, outside proprietary walled gardens, none of that feedback is collected consistently and presented in a unified manner. A recent review catalogued formative assessment products with a focus on those that were more authentic and open-ended. It was disappointing to find that most teachers still use spreadsheets to manually enter and track formative assessment data. The review spotted four problems:
1. Different standards
3. No agreement on competency
4. Inadequate tools
Digital learning and the explosion of formative data mean the beginning of the end of week-long state tests. By using thousands of formative observations it will be increasingly easy to accurately track individual student learning progressions. But making better use of the explosion of formative data will require leadership and investment. This is an education problem more than a technology problem. It would help if school networks agreed on competency-based protocols and used their market leverage to drive investment towards solutions. Thus, the following are lacking in formative assessment:
3.2.1 Interview
The important aspects of an interview are the establishment of rapport and obtaining responses from the respondent. Thus, the interview is a process of communication or interaction in which the respondent delivers the required information to the interviewer face-to-face. It is used effectively to collect useful information in many research situations.
When the researcher is particular about asking the questions in person, in pursuit of his own interactive objectives, he uses this process of questioning, which is called an interview. Here the information is collected from people verbally, in their physical presence.
The responses of the respondent are then recorded by the interviewer on a separate sheet. An interview can be conducted in person or in a group. When the interview is conducted in a group, the size of the group should not be so large that it inhibits participation of most of the members, and at the same time it should not be so small that it fails to offer substantially greater coverage than an individual interview. The optimum size is approximately 10-12 persons.
Social, intellectual and educational homogeneity is important for effective participation of all group
members. A circular seating arrangement, with the interviewer as one of the group, is conducive to
full and spontaneous reporting and participation. The interview can be conducted one or more times as per requirement. As a research tool, the interview is used in formal and informal, directive and non-directive forms.
Characteristics of Interview:
1. It is a social interaction.
2. It is a sincere method.
4. It involves the direct interaction of the interviewer and the respondent.
10. It involves the face-to-face contact of the respondent and the interviewer.
Construction
The steps of an interview include preparation of the interview, execution of the interview, note taking and analysis of the information.
Objectives of interview
In this step, the general aims of the research are converted into specific objectives. The area, the information to be collected, the respondents and the type of interview are decided according to the objectives.
Proper training, guidance and experience ensure a good interview. It is a chain of appropriate questions and answers. The answers to effective questions depend on the content, motivation, attitudes, expectations about the information, the time of the interview and the ability of the interviewer to establish intimacy. The responses can be objective or subjective, special or general, free or restricted.
After careful evaluation and critical consideration of the above aspects, the appropriate types of questions are planned and a register is prepared from which the investigator can use the appropriate type of question. The items can be in the form of questions, fill in the blanks, rating scales, checklists, etc. The responses can then be worked out accordingly.
Execution of Interview
The execution of the interview means conducting the interview. Whether it is a personal or a group interview as per the pre-plan, before starting the interview it is necessary to disclose the interviewer's identity and the objectives and type of the interview. The investigator should carry the interview register and, if necessary, a tape recorder and camera. Any necessary instructions are delivered to the respondents. The execution of the interview includes establishing rapport and eliciting information.
Establishing Rapport
To obtain all the necessary, relevant and important information related to the subject, it is necessary to gain the confidence of the respondent, thus leading towards a good and successful interview. The interviewer should be polite, well dressed, cool, calm, patient and decent, capable of questioning, and must possess good understanding. The investigator should himself be clear about the questions, their expected responses and the objectives of the interview. The investigator should be skilful, positive, cheerful, unbiased and capable, free of prejudice, and should bear an attitude of sympathy, thus establishing a good rapport with the respondents.
Note taking
In the final step of the interview, the responses are recorded on a paper sheet, a predesigned answer sheet, a tape recorder or a video recorder, as required. The information is then condensed through analysis. Various activities and skills may be used to note down the complete information from the respondent.
3.2.2 Inventories
Meaning
A concept inventory is a test to assess students’ conceptual understanding in a subject area.
It consists of multiple-choice questions in which several items are used to evaluate understanding for
each concept.
A key feature is that the items evaluate not simply whether a student gets an answer correct or incorrect, but the nature or quality of the student's understanding. Each incorrect response option for an item reflects a different kind of understanding of the concept.
Characteristics of inventories
Questions will be direct.
There will be no specific correct answers.
2. The date when the observation is going to be conducted is another common part of an observation schedule. This includes the number of days or the span of time needed to complete the actual observation, the specific time of day when the observation will be done, as well as when the results of the observation will be available.
3. The names of the individuals or groups involved in the particular observation activity. These individuals or groups include teachers or other members of the school faculty, students, employees, managers, supervisors, the observer or evaluator, etc.
4. The topic or the main focus of the said observation activity is also included, and often the goals
and objectives are written on top as a guide for both the observer and the one being observed.
5. There are also instructions or directions provided on some observation schedule templates, especially if there are specific things that need to be observed and collected in a particular observation.
6. A legend or abbreviation for scoring or evaluation is made available in some observation schedule
templates to allow the users to provide the information that they have gathered in a uniform
way.
8. The list of specific and related observation tasks or activities involved in the observation being
conducted.
9. Tests and related questions that are being asked before, during and after the observation.
3.2.4 Check-list
It is one of the specific instruments for evaluation. A checklist takes the form of a questionnaire in which the answers to the questions are marked. A checklist can be used for self-evaluation or for the evaluation of others. It shows whether a student possesses a particular characteristic or not, and thus helps in the evaluation of students.
Characteristics of Checklist: A checklist is used for the evaluation of self and others. It is used as an instrument of observation. It involves questions and their answers. It involves marks made by the respondent. It covers the characteristics of the particular subject to be evaluated.
Construction and Application of Checklist: The first horizontal line of the checklist is used to write the name or number of the subject under observation.
The characteristics of the subject or thing to be evaluated are arranged in the vertical column of the evaluation sheet, with corresponding blank spaces in the adjacent columns in which to place a tick mark. The characteristics present in the subject under observation are then determined, and if a characteristic is present, a tick mark is placed in its column. The frequency of all tick marks is then counted and marks are given to students on the basis of predefined norms or standards. The percentage, mean, median or correlation can then be computed.
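As a rough illustration, the tick-counting and percentage step described above can be sketched in Python. The function name, the pass through the tick marks, and the sample data are illustrative assumptions, not taken from the text:

```python
# A minimal sketch of checklist scoring: count the characteristics
# marked present and convert the count to a percentage.

def score_checklist(ticks, total_items):
    """Count tick marks (True = characteristic present) and return
    the frequency and the corresponding percentage."""
    frequency = sum(1 for t in ticks if t)
    percentage = 100.0 * frequency / total_items
    return frequency, percentage

# One student observed against a 10-item checklist (illustrative data).
observed = [True, True, False, True, True, False, True, True, True, False]
freq, pct = score_checklist(observed, len(observed))
print(freq, pct)  # 7 ticks -> 70.0 per cent
```

The resulting percentages for a class can then be fed into the mean, median or correlation computations the text mentions.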
Uses of Checklist:
4. The checklist is used to know the developmental direction of a specific behaviour pattern.
3.2.5 Rating Scale
The technique or tool with which the researcher or observer externally observes the degree to which various characteristics are developed in a person, and records it methodically, is called a rating scale. Here the evaluation is done in relation to the observer's opinion. Such a tool or instrument, which converts opinion into numbers, is called a rating scale. It can be used to evaluate personality traits, creative skills, individual or social adjustment, etc.
Importance of Rating Scale
1. Any characteristic can be measured through rating scale.
2. It is helpful to evaluate the behaviour which other tools can hardly deal with.
5. The level of each characteristic of each student of the class can be known.
6. It is helpful to deliver all the necessary information related to the progress of students.
7. The rating scale is also useful for checking the measurements made by other methods or techniques.
2. The interpretations and recommended action should be noted separately from the description.
4. The incident recorded should be one that is considered significant to the student's growth and development.
Purpose
1. To furnish the multiplicity of evidence needed for good cumulative record.
2. To substitute specific, exact descriptions of behaviour for vague generalizations about students.
3. To stimulate teachers to look for information that is pertinent in helping each student achieve good self-adjustment.
4. To understand individual’s basic personality pattern and his reactions in different situations.
7. It can be maintained in areas of behaviour that cannot be evaluated by other systematic methods.
8. It helps students to improve their behaviour; since it is direct feedback on an entire observed incident, the student can analyse his behaviour better.
Construction
10. Keep a notebook handy to make brief notes to remind you of incidents you wish to include in
the record. Also include the name, time and setting in your notes.
11. Write the record as soon as possible after the event. The longer you leave it to write your
anecdotal record, the more subjective and vague the observation will become.
12. In your anecdotal record identify the time, child, date and setting
14. Include the responses of other people if they relate to the action.
19. The teacher should have practice and training in making observations and writing records.
2. Group Dynamics: Group dynamics deals with the attitudes and behavioural patterns of a
group. Group dynamics concern how groups are formed, what is their structure and which
processes are followed in their functioning. Thus, it is concerned with the interactions and
forces operating between groups. Group dynamics is relevant to groups of all kinds – both
formal and informal.
3. Characteristics of a Group:
(c) Common fate (they will swim together)
(d) Common goals (the destiny is the same and emotionally connected)
(e) Face-to-face interaction (they will talk with each other)
(f) Interdependence (each one is complementary to the other).
(g) Self-definition as group members (what one is who belongs to the group)
(h) Recognition by others (yes, you belong to the group).
4. Group Dynamics – 4 Important Characteristics
(a) Describes how a group should be organised and operated. This includes the pattern of leadership and cooperation.
(b) Consists of a set of techniques such as role playing, brainstorming, group therapy, sensitivity training, etc.
(c) Deals with the internal nature of groups, their formation, structure and processes, and the way they affect individual members, other groups and the organisation as a whole.
(d) Refers to changes which take place within groups and is concerned with the interaction and forces obtaining between group members in a social setting.
5. The nature of Group Dynamics
(a) Orienting assumption
(b) Groups are Real
(c) Group processes are real
(d) Groups are more than the sum of their parts
(e) Groups are living systems
(f) Groups are influential
(g) Groups shape society.
4. Socio-metric technique is more useful with small groups. The position or status of the individual
is determined on the basis of some particular criterion.
Limitations
1. The data of socio-metric tests seem quite different from other kinds of data.
4. There are certain traits or qualities that are very difficult to be measured and if at all they are
measured through observations or other tools the measurement may not be accurate and free
from subjectivity.
(a) Members are discreet with their behaviour. Conflict, controversy, misunderstanding and
personal opinions are avoided even though members are starting to form impressions of
each other.
(b) This stage is characterized by members seeking either a work assignment (in a formal
group) or other benefit, like status, affiliation, power, etc. (in an informal group).
(c) At this stage, group members are learning what to do, how the group is going to operate,
what is expected, and what is acceptable.
(a) The storming stage is where dispute and competition are at their greatest, because now group members have an understanding of the work and a general feeling of belongingness towards the group and its members.
(b) This is the stage where the dominating group members emerge, while the less confronta-
tional members stay in their comfort zone.
(c) The next stage in this group is marked by the formation of dyads and triads. Members
seek out familiar or similar individuals and begin a deeper sharing of self.
(a) In this stage, the group becomes fun and enjoyable. Group interactions are a lot easier, more cooperative and productive, with weighed give and take, open communication, bonding, and mutual respect.
(b) If there is a dispute or disruption, it’s comparatively easy to be resolved and the group
gets back on track.
(c) Group leadership is very important, but the facilitator can step back a little and let group
members take the initiative and move forward together.
4. Synergy (Performing Stage)
(a) At this stage, the morale is high as group members actively acknowledge the talents,
skills and experience that each member brings to the group. A sense of belongingness is
established and the group remains focused on the group’s purpose and goal.
(b) Members are flexible, interdependent, and trust each other. Leadership is distributive and
members are willing to adapt according to the needs of the group.
(a) This stage of a group can be confusing and is usually reached when the task is successfully
completed. At this stage, the project is coming to an end and the team members are
moving off in different directions.
(b) The group decides to disband. Some members may feel happy over the performance, and
some may be unhappy over the stoppage of meeting with group members. Adjourning
may also be referred to as mourning.
A group of students discussing a lecture or students from different schools working together over
the Internet on a shared assignment are both examples of collaborative learning.
Cooperative learning, which will be the primary focus of this workshop, is a specific kind of collaborative learning. In cooperative learning, students work together in small groups on a structured activity. They are individually accountable for their work, and the work of the group as a whole is also assessed. Cooperative groups work face-to-face and learn to work as a team.
In order to create an environment in which cooperative learning can take place, three things are
necessary
During early childhood, SEL skills are organized around positive engagement with people and
the environment, managing emotions within social interactions, and remaining connected with adults
while successfully moving into the world of peers. These tasks can be difficult to navigate: young
children are often required to sit still or wait, attend, follow directions, approach group play, and get
along with others both at school and outside of school. SEL tasks then change radically for children
entering middle childhood. As children become aware of a wider social network, they learn to
navigate the sometimes-treacherous waters of peer inclusion, acceptance, and friendship. Managing how and when to show emotion becomes crucial, as does knowing with whom to share emotion-laden experiences and ideas. Adolescents are expected to form closer relationships with peers; successfully negotiate a larger peer group and other challenges in the transition to middle and high school;
come to understand the perspectives of others more clearly than ever before; achieve emotional
independence from parents and other adults while maintaining relationships with them; establish
clear gender identity and body acceptance; prepare for adulthood; and establish a personal value
or ethical system and achieve socially responsible behaviour. In the academic realm, older children
and adolescents are required to become much more independent in their engagement with ever more
complex coursework, and to consider how their achievement is moving them toward independence.
SEL is therefore integral to a child’s development from preschool through adolescence and is often
related to his or her success in school.
5. They get to know their strengths and weaknesses and the skills they have.
6. Students should assess both the process and the product of their learning. While assessment of the product is often the task of the teacher, assessing the process encourages students to examine their own work and understand how they learn.
Drawbacks
1. Can be subjective, since students may not be sincere and may over-evaluate themselves.
3. Students may not interpret the criteria properly; results may not be accurate if they do not understand the criteria properly.
Self-assessment is different from self-grading. Self-assessment uses evaluative processes in which judgment is involved, whereas self-grading is the marking of one's own work against criteria set by the instructor. Students may initially resist attempts to involve them due to insecurities or lack of confidence in their ability to evaluate their own work objectively.
1. Empower students to take responsibility for and manage their own learning.
6. Students can learn how they learn, what others expect, and what areas they should work on
to improve.
7. Students are actively engaged in learning and they may enhance learning.
Drawbacks
1. Can be negatively affected by group collusion or conflict. Students can use peer assessment as a tactic of antagonism towards other students by giving unmerited low evaluations, or they may give favourable evaluations to their friends.
Students can occasionally apply unsophisticated judgments to their peers. For example, students who are shy, reserved and quieter may receive low grades.
3.5 Portfolio assessment
It is a purposeful collection of student work that exhibits the students’ efforts, progress and achieve-
ment in one or more areas. The collection must include student participation in selecting contents,
the criteria for selection, the criteria for judging merit and evidence of student self-reflection.
3.5.1 Scope
Portfolio assessment enables students to reflect on their real performance, to show their weak and strong domains, and to observe their own progress during the learning process, and it encourages students to take responsibility for their own learning. Since a portfolio enables information to be collected from different sources such as the student's parents, friends, teachers and the student himself, it enables teachers to have reliable information about the student. Portfolios are important tools for the assessment of students' learning products and processes.
Thus, the portfolio has a potential which enables students to learn during assessment and to be assessed during learning (assessment for learning and assessment of learning). Therefore, it should certainly be applied in primary education for different courses such as Science and Technology, Mathematics and Social Science, to observe students' progress during the learning process and to provide the required assistance depending on their performance.
3.5.2 Uses
1. Portfolio assessment matches assessment to teaching.
2. It has clear goals. In fact, they are decided on at the beginning of instruction and are clear to
teacher and students alike.
4. It is a tool for assessing a variety of skills not normally testable in a single setting for traditional
testing.
7. Develops social skills. Students interact with other students in the development of their own
portfolios.
3.5.3 Developing and assessing Portfolio
A portfolio assessment can be an examination of student selected samples of work experiences and
documents related to outcomes being assessed, and it can address and support progress toward
achieving academic goals, including student efficacy. Portfolio assessments have been used for large-
scale assessment and accountability purposes, for purposes of school-to-work transitions, and for
purposes of certification.
Portfolio assessments grew in popularity in the United States in the 1990s as part of a widespread
interest in alternative assessment. Because of high-stakes accountability, the 1980s saw an increase
in norm-referenced, multiple-choice tests designed to measure academic achievement. By the end of
the decade, however, there were increased criticisms over the reliance on these tests, which opponents
believed assessed only a very limited range of knowledge and encouraged a ”drill and kill” multiple-
choice curriculum. Advocates of alternative assessment argued that teachers and schools modelled
their curriculum to match the limited norm-referenced tests to try to assure that their students did
well, ”teaching to the test” rather than teaching content relevant to the subject matter. Therefore, it
was important that assessments were worth teaching to and modelled the types of significant teaching
and learning activities that were worthwhile educational experiences and would prepare students for
future, real-world success.
Involving a wide variety of learning products and artefacts, such assessments would also enable
teachers and researchers to examine the wide array of complex thinking and problem-solving skills
required for subject-matter accomplishment. More likely than traditional assessments to be multi-
dimensional, these assessments also could reveal various aspects of the learning process, including
the development of cognitive skills, strategies, and decision-making processes. By providing feedback
to schools and districts about the strengths and weaknesses of their performance, and influencing
what and how teachers teach, it was thought portfolio assessment could support the goals of school
reform. By engaging students more deeply in the instructional and assessment process, furthermore,
portfolios could also benefit student learning.
Developing a rubric is a dynamic process. As the goals of instruction become clearer to the teacher, the ability to define ranges and levels of execution within the processes of the active learning experience will make the development of a rubric easier. Some teachers may require a ”run-through” before they are ready to finalize a rubric. With unfamiliar content it is acceptable to write a rubric after the fact and save it for future reference. Even after a rubric is used, it may need modification.
• List the concepts and rewrite them into statements that reflect both cognitive and performance
components.
• Identify the most important concepts or skills being assessed in the task.
• On the basis of the purpose of the task, determine the number of points to be used for the
rubric (example: 4-point scale or 6-point scale)
• Starting with the desired performance, determine the description for each score, remembering to use the importance of each element of the task or performance to determine the score or level of the rubric.
• Compare student work to the rubric. Record the elements that caused you to assign a given rating to the work.
• Revise the rubric descriptions based on performance elements reflected by the student work that you did not capture in your draft rubric.
• Rethink your scale. Does a []-point scale differentiate enough between types of student work to satisfy you?
• Adjust the scale if necessary. Reassess student work and score it against the developing rubric.
4 Analysis, Interpretation, Reporting and Communicating of Students' Performance
Contents
4.1 Interpreting students’ performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1.1 Descriptive statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.2 Grading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.1 Concept of Grading System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.2 Merits of Grading System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.3 Types of Grading System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.1 Characteristics of a Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.2 Merits of Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.3.3 Demerits of Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4 Reporting students’ performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.1 Progress reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.2 Cumulative records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.3 Profiles and their uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.4 Portfolios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.5 Using descriptive indicators in report cards . . . . . . . . . . . . . . . . . . . . 70
4.4.6 Role of feedback to stake holders (students, parents, teachers) . . . . . . . . . 70
4.5 Identifying Strengths and weaknesses of Learners . . . . . . . . . . . . . . . . . . . . 72
5.1 Nov/Dec-2018 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.2 December-2019 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
1. Mean: The mean of a distribution is commonly understood as the arithmetic average. It is computed by dividing the sum of all the scores by the number of measures. The formula is

M = ΣX / N

where M = mean, Σ = sum of, X = a score in the distribution, and N = number of measures.
2. Median: The median is the middle-most point of a distribution, i.e., the midpoint of the given series. In other words, half of the values in the distribution lie below this midpoint and half lie above it. It is a measure of position rather than of magnitude. It is the 50th percentile point in the distribution. (1) When the number of scores is odd: if no scores are repeated, the median is the middle value. (2) When the number of scores is even: the median is the average of the two middle values.
3. Mode: The mode is defined as the element that appears most frequently in a given set of
elements. Using the definition of frequency given above, mode can also be defined as the
element with the largest frequency in a given data set. For a given data set, there can be more
than one mode. As long as those elements all have the same frequency and that frequency is
the highest, they are all the modal elements of the data set.
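The three definitions above can be sketched in Python using only the standard library; the sample scores are illustrative:

```python
# A sketch of the three measures of central tendency defined above.
from collections import Counter

def mean(scores):
    return sum(scores) / len(scores)          # M = sum of X / N

def median(scores):
    s = sorted(scores)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:                            # odd N: the middle value
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2          # even N: average of two middle values

def modes(scores):
    counts = Counter(scores)
    top = max(counts.values())                # the largest frequency
    return [x for x, c in counts.items() if c == top]  # may be more than one mode

scores = [52, 60, 60, 71, 80, 80, 95]
print(mean(scores), median(scores), modes(scores))
```

Note that `modes` returns a list, since a data set can have more than one modal element, exactly as the definition above states.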
1. The Mean is the most stable measure of central tendency, easy to understand and easy to calculate. It takes all values of the data into consideration. It is the best measure for estimating population values from sample values.
2. The Median is the middle-most point in the distribution. It is the best measure of central tendency when extreme scores affect the mean, and also when the distribution is open ended, i.e., when the lower limit of the lowest class interval or the upper limit of the highest class interval is not known. But unlike the mean, the median cannot be subjected to further mathematical operations.
3. The Mode is the easiest measure of central tendency to calculate and understand. It can be identified by simple observation. It corresponds to the value with the highest frequency, i.e., the value which occurs most frequently in the distribution. Like the median, the mode cannot be used in further mathematical operations. The mode can be used with nominal, ordinal and interval scales of measurement, whereas the mean and median can be used with interval or ratio scales.
Measures of variability
The average of the squared deviations of the measures from their mean is known as the variance, i.e.,

σ² = Σx² / N

where σ² = variance of the sample, x = deviation of a raw score from the mean, and N = number of measures.
The measures of central tendency give us the single central value representing their entire data
but fail to represent the deviations of the values in the distribution. We cannot make out anything
about the internal structure of the distribution. That is, how the scores are spread or scattered in a
distribution from a given point of measures of central tendency. It is, therefore, necessary to study
the variability of the scores in the distribution. In order to give a better shape to the given data it is not enough to find out the measures of central tendency; it is also necessary to make a detailed study of the variability of the given data. These measures are known as second-order measures, based on the first-order measures of mean, median and mode.
1. Standard Deviation: The standard deviation is the most stable index of variability and is used in research and experimental studies. Very often this measure is used in all interpretations, without which it is not possible to interpret the given data. It differs from the mean deviation in several respects: in the standard deviation we avoid the difficulty of signs by squaring the separate deviations, and it is the squared deviations that are used in computing this measure. The standard deviation is taken from the mean and not from the median or mode. The standard deviation is therefore called the root mean squared deviation and is represented by the Greek letter sigma (σ).
(Find the mean; subtract it from each X and square the result; sum the squared deviations, divide by N and take the square root.)
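The procedure in parentheses above (deviations from the mean, squared, averaged, square-rooted) can be sketched in Python; the sample data is illustrative:

```python
# A sketch of the variance and standard deviation formulas above:
# variance = sum of squared deviations / N, and the standard deviation
# is its square root (the "root mean squared deviation").
import math

def variance(scores):
    m = sum(scores) / len(scores)             # find the mean
    deviations = [x - m for x in scores]      # subtract it from each X
    return sum(d * d for d in deviations) / len(scores)  # mean of squares

def std_dev(scores):
    return math.sqrt(variance(scores))        # take the square root

scores = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance(scores), std_dev(scores))      # 4.0 and 2.0 for this data
```

This divides by N, matching the population formula given above; sample-based work often divides by N − 1 instead.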
2. Quartile Deviation: We mainly consider three quartiles, denoted by Q1, Q2 and Q3. Q1 is called the first quartile; one-fourth (1/4) of the measures in the distribution lie below it. Q2 is called the second quartile; it is nothing but the median of the distribution, as half of the measures lie below it. Q3 is referred to as the third or upper quartile, which divides the distribution in such a way that three-fourths (3/4) of the measures lie below that point. The quartile deviation Q is one-half of the scale distance between the 75th percentile Q3 and the 25th percentile Q1 in a frequency distribution. The quartile deviation Q is therefore found from the formula

Q = (Q3 − Q1) / 2
• Not affected by extreme scores or outliers.
• Does not take into account all the scores in the distribution.
• Does not play a role in advanced statistical procedures.
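The formula Q = (Q3 − Q1)/2 above can be sketched as follows. Textbooks differ on how quartiles are located in small data sets, so this sketch uses the common median-of-halves convention; the sample scores are illustrative:

```python
# A sketch of the quartile deviation: half the distance between the
# 75th percentile (Q3) and the 25th percentile (Q1).

def median(sorted_scores):
    n = len(sorted_scores)
    mid = n // 2
    if n % 2 == 1:
        return sorted_scores[mid]
    return (sorted_scores[mid - 1] + sorted_scores[mid]) / 2

def quartile_deviation(scores):
    s = sorted(scores)
    mid = len(s) // 2
    q1 = median(s[:mid])                      # median of lower half -> Q1
    q3 = median(s[mid + (len(s) % 2):])       # median of upper half -> Q3
    return (q3 - q1) / 2                      # Q = (Q3 - Q1) / 2

scores = [1, 3, 5, 7, 9, 11, 13, 15]
print(quartile_deviation(scores))             # Q1 = 4, Q3 = 12, Q = 4.0
```

For an odd number of scores the middle value is excluded from both halves before Q1 and Q3 are taken, which is one of the standard conventions.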
3. Range: The range is the simplest measure of variability. It is easy to understand and simple to compute. It is the difference between the highest and the lowest scores of the distribution. It is the most general measure of variability and is computed when we wish to make a rough comparison of two or more groups for variability. The range takes into account only the extremes of the series of scores and is a very unreliable measure of variability, because it considers only the highest and lowest scores; apart from these two, it reveals nothing about the other scores in the series.
• Easy to calculate.
• Can be used with ordinal as well as interval/ratio data.
• Encompasses entire distribution.
• Depends on only two scores in the distribution and is therefore not reliable.
• Cannot be found if there are undeterminable or open-ended scores at either end of the
distribution.
• Plays no role in advanced statistics.
4. Rank Correlation: When we do not have the actual scores of students on an examination but only their ranks, or when we are dealing with data which are heterogeneously distributed so that the scores are not very meaningful, the best method of determining the correlation coefficient between the given variables is to apply Spearman's rank-difference correlation coefficient. Condition: ρ (rho) requires that the data be in ranks, or be capable of being ranked, for both variables X and Y.

Calculation of the Rank-Difference Correlation: The rank-difference coefficient of correlation stated by Spearman can be calculated by the following formula:

P = 1 − (6ΣD²) / (N(N² − 1))

where N = number of pairs, P = rank-difference correlation coefficient, and D = difference between the two ranks assigned to an individual.

It is the best correlation coefficient especially when the number of cases is less than 30 and the data are in ranks or capable of being ranked.
• This measure is especially useful when quantitative measures for certain factors cannot
be fixed but the individuals in the group can be arranged in order.
• A knowledge of this is helpful in educational and vocational guidance, prognosis, in the
selection of workers in office or factory and in educational decision making.
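Spearman's formula above can be sketched in Python. The rank helper assumes no tied ranks (ties need average ranks, which this sketch omits), and the sample marks are illustrative:

```python
# A sketch of Spearman's rank-difference formula:
# P = 1 - 6*sum(D^2) / (N*(N^2 - 1)), assuming no tied ranks.

def rank(values):
    """Rank 1 for the highest value, as marks are usually ranked."""
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]

def spearman_rho(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n * n - 1))

maths = [90, 80, 70, 60, 50]
science = [85, 75, 65, 55, 45]                # same ordering as maths
print(spearman_rho(maths, science))           # perfect agreement -> 1.0
```

When the two rankings agree exactly, every D is zero and P = 1; when they are exactly reversed, P = −1.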
Graphical representation
1. Histograms: Another way of presenting the data by means of a graph is the histogram. A histogram presents an accurate picture of the relative positions of the total frequency from one interval to the next. The frequencies within each interval of a histogram are represented by a rectangle, the base of which equals the length of the interval and the height of which equals the number of scores within that interval. Since in a histogram the scores are assumed to be spread uniformly over the entire interval, the area of each rectangle is directly proportional to the number of measures in the interval. Another way of presenting the data is the column diagram. Construction of a Histogram: The illustration below is a histogram showing the results of a final exam given to a
hypothetical class of students. Each score range is denoted by a bar of a certain colour. If this
histogram were compared with those of classes from other years that received the same test
from the same professor, conclusions might be drawn about intelligence changes among stu-
dents over the years. Conclusions might also be drawn concerning the improvement or decline
of the professor’s teaching ability with the passage of time. If this histogram were compared
with those of other classes in the same semester who had received the same final exam but who
had taken the course from different professors, one might draw conclusions about the relative
competence of the professors.
Some histograms are presented with the independent variable along the vertical axis and the
dependent variable along the horizontal axis. That format is less common than the one shown
here.
• Draw a horizontal line at the bottom of a graph paper along which to mark off units repre-
senting the class intervals; it is better to start with the class interval of lowest value.
• Draw a vertical line through the extreme end of the horizontal axis along which to mark off
units representing the frequencies of the class intervals. Choose a scale which will make
the largest frequency (the height of the graph) approximately 75 per cent of the length of the x-axis.
• Draw rectangles with class units as base, such that the areas of rectangles are proportional
to the frequencies of the corresponding class intervals.
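The tallying step behind the rectangles can be sketched as follows; the scores and class intervals are illustrative:

```python
# Illustrative exam scores and class intervals (10-19, 20-29, ..., 50-59).
scores = [12, 18, 25, 27, 31, 33, 38, 41, 45, 45, 52, 58]
edges = [10, 20, 30, 40, 50, 60]

# Count how many scores fall in each interval; each count becomes the
# height of one rectangle, so with equal-width intervals the rectangle
# areas stay proportional to the frequencies.
counts = [sum(lo <= s < hi for s in scores) for lo, hi in zip(edges, edges[1:])]
print(counts)  # [2, 2, 3, 3, 2]
```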
Uses: The histogram is the most popular graph used to represent a continuous frequency distri-
bution. The width of each rectangle is proportional to the length of the class interval and its
height to the frequency; the graph formed by a series of such rectangles adjacent to one another is called a
histogram. Thus the area of the histogram is proportional to the total number of frequencies
spread over all the class intervals.
2. Frequency curves: The Cumulative Frequency Curve is also called the Ogive. We convert the cu-
mulative frequencies into cumulative percentage frequencies and then plot the cumulative
percentage frequency corresponding to each class interval; the resulting graph is called the
ogive. This curve differs from the cumulative frequency graph, in which the frequencies are
not expressed as cumulative percentages. For the ogive, the conversion of cumulative
frequencies into cumulative percentages is done by dividing each cumulative frequency by N
and multiplying by 100.
The Cumulative Frequency Curve is drawn in the same manner as the frequency polygon.
• Draw a horizontal line at the bottom of a graph paper along which to mark off units repre-
senting the class intervals.
• Draw a vertical line through the extreme end of the horizontal axis along which to mark off
the cumulative percentages corresponding to each class interval. Again choose a scale
that makes the height of the curve approximately 75 per cent of the width of the x-axis.
Join the points so as to obtain the ogive, as shown in the figure.
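The conversion described above (divide each cumulative frequency by N and multiply by 100) can be sketched as:

```python
# Illustrative frequencies for five class intervals.
freqs = [2, 5, 8, 3, 2]
n = sum(freqs)  # N = 20

# Running total of frequencies, converted to cumulative percentages,
# which are the y-values plotted on the ogive.
cum_pct = []
total = 0
for f in freqs:
    total += f
    cum_pct.append(100 * total / n)
print(cum_pct)  # [10.0, 35.0, 75.0, 90.0, 100.0]
```

The final value is always 100, since the whole distribution lies at or below the top interval.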
Uses:
• Percentiles and percentile ranks may be determined quickly and accurately from the ogive
when the curve is carefully drawn and the scale divisions are precisely marked.
• A useful overall comparison of two or more groups is provided when the Cumulative Frequency
Curves representing their scores are plotted upon the same horizontal and vertical axes.
• Percentile norms are determined directly from Cumulative Frequency Curve.
4.2 Grading
4.2.1 Concept of Grading System
The usual practice of assessment in schools is through conducting examinations. One of the major
drawbacks of our examination system is reporting students’ performance in terms of marks. In order
to minimize the limitations of the present-day examination system, a major reform concerns trans-
forming the marking system into a grading system.
Grading is a process of classifying students into groups based on their performance, with the help
of predetermined standards expressed in a symbolic form, i.e., letters of the English alphabet. As these
grades and their corresponding symbols are pre-determined and well defined, all stakeholders can
understand them uniformly and consistently. While developing the grading system, it is of utmost
significance that the meaning of each grading symbol be clearly spelt out. In spite of strict ad-
herence to the pre-determined stipulations, there may be inter-examiner and intra-examiner
variations. Sometimes the grades awarded may be compared within and between groups. In this
type of comparison, not only the grades awarded by a particular teacher but also the grades awarded
by different teachers would be compared. This helps in ascertaining the position of students with
reference to a group. Comparing grades awarded by a single teacher (intra-group) and by different
teachers (inter-group) with reference to a larger group is considered norm-referenced. This helps
in locating the position of a student in a larger group. Hence, norm-referenced measures would
help in comparing the grades awarded by different teachers and institutions. Thus, the grades may
be used for communicating students' performance with reference to specified criteria and also
the relative position of students with reference to their peer group.
Examinations are conducted to examine the process of learning. They help teachers to locate learning variations among children.
Examinations also aim at helping children estimate their learning performance and accordingly im-
prove their proficiencies. But these idealistic purposes of examinations have taken a back seat.
Securing marks rather than improving levels of attainment has become the main objective
of students. Teaching is a deliberate process of achieving instructional objectives and evaluation
is a means of estimating the extent of their accomplishment. But due to the prevalence of marks
consciousness, attainment of marks rather than assessment of instructional objectives has become all
important.
• As grading involves grouping the students according to their attainment levels, it helps in
categorizing the students as per their attainments of instructional objectives also.
• One of the significant arguments in favour of the grading system is that it creates favourable
conditions for classification of students’ performance on a more convincing and justifiable scale.
• In order to understand why grading is a better proposition than the marking system, it is
necessary to look closely into the various procedures of scaling.
• Grading is a far more satisfactory method than the numerical marking system.
• The justification for the superiority of grading system over marking system is that it signifies
individual learner’s performance in the form of a certain level of achievement in relation to the
whole group.
• DIRECT GRADING: The process of assessing students’ performance qualitatively and express-
ing it in terms of letter grades directly is called direct grading. This type of grading can be used
for assessment of students’ performance in both scholastic and co- scholastic areas. However,
direct grading is mostly preferred in the assessment of co-scholastic learning outcomes. While
evaluating co-scholastic learning outcomes, the important factors are listed first and then a
student’s performance is expressed in a letter grade. This type of grading minimizes inter-
examiner variability and is easy to use when compared to indirect grading. However, direct
grading lacks transparency and diagnostic value, and does not encourage competition to the
extent required.
• ABSOLUTE GRADING: Let us now examine the methodology of awarding grades in terms
of absolute standards. As has been pointed out earlier, absolute grading is based on a pre-
determined standard that becomes the reference point for students’ performance. In absolute
grading, the marks are directly converted into grades on the basis of a pre-determined
standard.
Absolute grading can be on a three-point, five-point or nine-point scale for the primary, up-
per primary and secondary stages respectively.
– Three-Point Scale: Students are classified into three groups, above average, average and
below average, on the basis of pre-determined ranges of scores as shown in the table below.
– Five-Point Scale: Students are classified into five groups, distinction, first division, second
division, third division and unsatisfactory, on the basis of pre-determined ranges of scores
as shown in the table below.
– Nine-Point Scale: In absolute grading the ranges of marks or percentages of marks
need not necessarily be of equal size. The ranges of marks taken as the pre-determined standard
for classifying students into different groups may be arbitrary. In a nine-point grading
scale, the students may be classified into nine groups, namely, outstanding, excellent, very
good, good, above average, average, below average, marginal and unsatisfactory. An example of
nine-point absolute grading is provided in the table below.
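The conversion of marks into grades against pre-determined cut-offs can be sketched as follows; the five-point cut-offs used here are illustrative, not prescribed by any board:

```python
def absolute_grade(marks, cutoffs):
    """Return the grade for `marks`; `cutoffs` lists (minimum marks, grade), highest first."""
    for minimum, grade in cutoffs:
        if marks >= minimum:
            return grade
    return cutoffs[-1][1]  # fall through for out-of-range input

# Hypothetical five-point absolute grading scale.
five_point = [(75, "Distinction"), (60, "First division"),
              (50, "Second division"), (35, "Third division"),
              (0, "Unsatisfactory")]

print(absolute_grade(82, five_point))  # Distinction
print(absolute_grade(47, five_point))  # Third division
```

A three-point or nine-point scale is the same idea with a different cut-off table, and the ranges need not be of equal size.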
4.3 Norms
Norms allow comparing the achievement of an examinee with that of a large group of examinees at the
same grade; this representative group is known as the norm group. A norm-referenced test is a test designed
to provide a measure of performance that is interpretable in terms of an individual's relative standing
in some known group. The norm group may be made up of examinees at the local, district,
state or national level. The development of norm-referenced tests is, however, expensive and time
consuming. Bormuth (1970) writes that the purpose of norms is to measure the growth in a student's attainment
and to compare his level of attainment with the levels reached by other students in the norm group.
• It classifies achievement as above average, average or below average for given grade.
• It is generally reported in the form of Percentile Rank, Linear Standard Score, Normalized
Standard Score and grade equivalent.
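A percentile rank, one of the reporting forms listed above, can be sketched as follows, using the common convention of counting half of the scores tied with the examinee's (the norm-group data are illustrative):

```python
def percentile_rank(score, norm_group):
    """Percentage of the norm group scoring below `score`, plus half of the ties."""
    below = sum(s < score for s in norm_group)
    equal = sum(s == score for s in norm_group)
    return 100 * (below + 0.5 * equal) / len(norm_group)

# Illustrative norm group of ten scores.
norm_group = [40, 55, 55, 60, 70, 75, 80, 85, 90, 95]
print(percentile_rank(70, norm_group))  # 45.0
```

A percentile rank of 45 means the examinee scored as well as or better than about 45 per cent of the norm group.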
4.3.2 Merits and Limitations of Norms
• To make differential predictions in aptitude testing.
• To get a reliable rank ordering of the pupils with respect to achievement.
• To identify the pupils who have mastered the essentials of the course more than others.
• However, there may be a lack of congruence between what the test measures and what is stressed in a local
curriculum.
• Norm-referencing may also promote unhealthy competition and be injurious to the self-concepts of low-scoring students.
Norm-referenced measurement is the traditional class-based approach. The measurement
relates to some norm, group or typical performance: it is an attempt to interpret test results
in terms of the performance of a certain group of students, the norm group against which test scores are compared.
Thus a norm-referenced test typically attempts to measure a more general category of competencies.
Because reporting student progress serves a variety of purposes, we believe no one method of reporting
is capable of serving all purposes well. A multi-faceted comprehensive reporting system is essential.
Multiple means of reporting progress are divided into two subsets: individual and whole school reports.
Within these subsets, the means for reporting may include but are not limited to:
1. Individual Subset - report cards, progress reports, standardized testing, evaluated projects
and assignments, portfolios and exhibitions of student work, homework, individual web pages,
parent-teacher conferences, student-teacher conferences and student led conferences.
2. Whole School Subset - standardized testing, open houses, and classroom and school-wide newslet-
ters. Each means of reporting on student progress will include a statement of purpose. The
statement of purpose may vary according to the specific type of reporting taking place and the
audience it is directed toward.
2. Physical development
3. Health matters
5. Special achievements
6. Personal details
• Focus on knowing your students and helping students know themselves: Before
diving into selecting a template, think about what learner profiles are for and how you will use
them. A template that is created just for you as the teacher is very different than a template
that is designed to help students understand themselves as learners.
• Think differently about data: Learner profiles can be an entirely new take on the idea of
data notebooks. Why not let students use these to track their own progress, reflect on their
learning styles and strengths, and set individual academic and nonacademic goals?
• Give the work back: Learner profiles do not need to be one more thing you have to do as a
teacher. You don’t have to create 30 binders. Think about how you could help students create
their own learner profiles.
• Revisit and revise: Over the course of the year, students are going to change and grow.
Allow space for them to record self-reflections on a regular basis.
4.4.4 Portfolios
• Portfolios remain quite popular in education coursework and with administrators evaluating
senior teachers. One reason might be that the portfolio is a very subjective form of assessment.
For anyone uncomfortable without a grading key or answer sheet, subjective evaluation can be
a scary task. Secondly, teachers often are unsure themselves of the purpose of a portfolio and
its uses in the classroom. Third, there is a question of how the portfolio can be most effectively
used to assess student learning.
• It also is important – especially if you plan to use the portfolio as a major grade for your course
– that you get another teacher to help with the evaluations. That ensures that your assessment
is reliable. Teachers often cut some slack for less academically inclined students, while holding
others to higher standards. The two scores then can be averaged to get a final grade. That will
show you and the student a more accurate assessment of their work products. Finally, student
involvement is very important in the portfolio process. It is vital that students also understand
the purpose of the portfolio, how it will be used to evaluate their work, and how grades for it
will be determined.
• For teachers who are not proficient in the language, the descriptive indicators provide an
appropriate choice of words.
• Teachers say that earlier parents of students who are weak in academics would get angry and
disappointed on seeing their ward’s report card, since the remarks were mainly on academics.
• When teachers mark students as average or poor, there is no systematic scaling. So, teachers
should give encouraging observations.
• Parents too seem to be happy: rather than receiving feedback about academics alone, they are
now able to understand where their children stand in each area and identify their strengths.
• However, some teachers are of the opinion that such a system is time consuming and would
have a “converse impact” on the child. The students would not be able to take criticism in the
right sense, they say.
• Students- Feedback is any response made in relation to students’ work or performance. It can
be given by a teacher, an external assessor or a student peer. It is usually spoken or written.
Feedback is most effective when it is timely, perceived as relevant, meaningful and encouraging,
and offers suggestions for improvement that are within a student’s grasp. It is intended to ac-
knowledge the progress students have made towards achieving the learning outcomes of a unit.
Good feedback is also constructive, and identifies ways in which students can improve their
learning and achievement. Providing a mark or a grade only, even with a brief comment like
"good work" or "you need to improve", is rarely helpful; such comments are common examples of
feedback that does not help students. It is widely recognized that feedback is an important
part of the learning cycle, but both students and teachers frequently express disappointment
and frustration in relation to the conduct of the feedback process. Students may complain that
feedback on assessment is unhelpful or unclear, and sometimes even demoralizing. Addition-
ally, students sometimes report that they are not given guidance as to how to use feedback to
improve subsequent performance. Even worse, students sometimes note that the feedback is
provided too late to be of any use or relevance at all. For their part, lecturers frequently com-
ment that students are not interested in feedback comments and are only concerned with the
mark. Furthermore, lecturers express frustration that students do not incorporate feedback
advice into subsequent tasks.
– Promote dialogue and conversation around the goals of the assessment task
– Emphasize the instructional aspects of feedback and not only the correctional dimensions.
– Remember to provide feed-forward: indicate what students need to think about in order
to bring their task performance closer to the goals
– Specify the goals of the assessment task and use feedback to link student performance to
the specified assessment goals
– Engage the students in practical exercises and dialogue to help them to understand the
task criteria
– Engage the students in conversation around the purposes of feedback and feed forward
– Design feedback comments that invite self- evaluation and future self- learning manage-
ment
– Enlarge the range of participants in the feedback conversation - incorporate self and peer
feedback
• Parents- A review process of the new reporting resources was carried out with a number of
schools. Schools that reviewed the materials found them useful and easy to follow. They be-
lieved that the materials signalled a desirable paradigm shift in reporting to parents.
In particular, the following aspects of the materials were highly valued by schools:
– There is a big gap between what schools are providing in the way of feedback and what
parents actually want.
– Parents don’t feel they are getting the right information in a timely manner to support
and coach their children
– Parents commented that the feedback they currently receive arrives too late to act on, as
the moment in time has passed.
– Parents prefer reporting based on their child's progression rather than measurement
against a benchmark (despite popular belief). This reflects the need for progressive report-
ing using a method such as the Hattie feedback and reflection model.
Parent Involvement
– Parental involvement decreases dramatically as a child progresses through education.
– Other family support decreases dramatically as a child progresses through education.
– Schools which integrate social activities and teamwork into the curriculum (not just by
making the kids play sport) have happier parents/students.
– Students who participate in task reflection with their parents on a weekly basis are more
likely to be a grade average student than students that participate in task reflection on a
less frequent basis.
• Administrator- To assess student progress toward the established district standards and to
facilitate the planning of various types of instruction, administration should ensure that teach-
ers are utilizing information from a variety of valid and appropriate sources before they begin
planning lessons or teaching. This could include data regarding students’ backgrounds, aca-
demic levels, and interests, as well as other data from student records to ascertain academic
needs and to facilitate planning appropriate initial learning. It is important for the adminis-
tration to note that information regarding students and their families is used by the staff for
professional purposes only and is kept confidential as a matter of professional ethics. Adminis-
trators should determine if teachers are using the numerous formative and summative diagnostic
processes available to assist in planning meaningful instruction. Formative measures include
ongoing teacher monitoring of student progress during the lessons, practice sessions, and on
daily assignments. Measures administered periodically, like criterion-referenced tests, grade-
level examinations or placement tests that are teacher-made or part of district-adopted mate-
rial, also provide helpful information on the status of student learning as instruction progresses.
Summative measures like minimum competency examinations, district mastery tests and stan-
dardized tests provide a different perspective from the ongoing formative measures. This
type of data enables the teacher to evaluate the long-term retention rate of their students and
to compare student learning on a regional, state or national basis. The administrators should
verify that teachers are preparing and maintaining adequate and accurate records of student
progress. This will include the regular and systematic recording of meaningful data regarding
student progress on specific concepts and skills related to the standards for each subject for
the grade level or course they are teaching. Once students’ success levels have been identified
from the records, the teacher should use the information to plan instruction and any necessary
remediation and enrichment. By utilizing ongoing information on achievement, teachers can
maintain consistent and challenging expectations for all students. Students and parents should
be informed of the students' progress toward achieving district goals and objectives through
comments on individual work, progress reports, conferencing, report cards and other measures.
Students should be encouraged to participate in self-assessment as a way of motivating them
to improve academic achievement.
To help a struggling student, it may be best to analyze the student’s strengths and weakness. This
requires that the student feel comfortable with the teacher and is able to express his feelings clearly.
After analyzing the student’s strengths and weaknesses, a teacher can develop a plan to help him.
Choose a comfortable setting to talk to the student. Avoid having the discussion around the student's
peers as students are often shy and nervous about revealing their feelings around their friends. Indi-
vidual attention may work best.
Start a general conversation. Ask how she is and if she’s having any problems. These questions
may reveal strengths and weaknesses on their own.
Ask the student where he excels. Have him elaborate on this skill. Ask detailed questions. Take
notes as the student speaks, but maintain eye contact to avoid alarming him. Move to a tangential
subject or skill after the student has described the first thing at which he excels. For example, if the
student says he likes computers, ask him about specific software.
Ask her about what she thinks she could improve. Avoid overtly criticizing her. Instead, prompt
her for self-criticism. Take notes. These criticisms will often be weaknesses.
Previous Years’ Question Papers
5
5.1 Nov/Dec-2018
1. (a) Explain the terms - Assessment, Evaluation and Examination.
(b) Classify different forms of assessment based on the purpose and define each one of them.
[3+7=10]
OR
2. (a) Distinguish between 'assessment for learning' and 'assessment of learning' with suitable
illustrations.
(b) Briefly summarize the recommendations of NCF-2005 on assessment and evaluation. [5+5=10]
3. (a) Describe the steps involved in planning and construction of an achievement test.
(b) Explain the characteristics of a good test. [6+4=10]
OR
4. (a) Discuss the concept of cognitive, affective and psychomotor domains of learning in assess-
ment and evaluation.
(b) Describe the guidelines for constructing various objective type items with suitable illus-
trations. [5+5=10]
5. (a) Examine the importance of assessing students' performance continuously and comprehen-
sively.
(b) What are anecdotal records? Explain how it is useful for a classroom teacher. [5+5=10]
OR
6. (a) What are anecdotal records? Explain how it is useful for a classroom teacher.
(b) Discuss the need of assessing social qualities of students in a classroom. Explain the
procedure adopted for the same. [4+6=10]
7. (a) Explain the meaning of different measures of central tendencies with their uses for a
classroom teacher.
(b) Describe the different procedures used in reporting student’s performance in classroom.
[6+4=10]
OR
Class interval 0-9 10-19 20-29 30-39 40-49 50-59 60-69 70-79 80-89 90-100
Frequency 2 3 4 8 9 11 15 13 10 5
(b) Discuss different types of grades and their uses with suitable illustrations. [7+3=10]
5.2 December-2019
1. (a) Explain briefly the purpose of assessment and evaluation in classrooms
(b) How can assessment and evaluation ensure the quality of education? [5+5=10]
OR
2. (a) Describe the relationship among educational objectives, learning experiences and evalua-
tion.
(b) What is validity? Explain the methods of estimating the validity of an achievement test.
[5+5=10]
3. (a) How does the table of specifications ensure the quality of an achievement test?
(b) Elucidate the steps involved in the construction of a diagnostic test
(c) Explain the criteria used for assembling the test items. [2+3+5=10]
4. What are process-oriented tools and techniques? Explain any one of them with suitable illus-
trations. [5+5=10]
5. (a) Describe different measures of variability. Explain the steps involved in computing the
best measure of variability with suitable illustrations.
(b) Discuss the different procedures adopted for reporting student’s performance in class-
rooms.
[5+5=10]
OR
22, 11, 14, 23, 09, 30, 40, 35, 08, 23, 17, 31, 27, 45, 31, 50
7. Discuss the steps involved in computing the rank correlation. Find out Spearman's rank
correlation for the following data.
Scores A B C D E F G H I
Judge - I 25 15 30 40 25 50 45 10 35
Judge - II 30 20 15 35 20 50 40 20 25
[2+3+5=10]