
An Attempt by El Fénix to make your Learning Easy

Assessment
of Learning

El Fénix Series
This book is free and open-source

If you come across wrong facts, calculations, or any other kind of error in this book, please mail the
details to latexbook12@gmail.com along with the book (subject) name and page number. Your
co-operation is highly appreciated.

El Fénix Series is an initiative from a group of students of Regional Institute of Education Mysore.
Our team is working on a number of projects to help the student community of RIE Mysore. We want
you to make proper use of the educational materials we prepare, and help us to help you.
Take care friends.

Team El Fénix
K Shania Kariappa (Book Incharge, Assessment of Learning)
Karthik V Pai (Founder, El Fénix)
Jayaprakash H M (Founder, El Fénix)
Vaishnav Sankar K (Founder, El Fénix)
Rohit Raj (Editor)
Ritik Roshan Mohanty (Editor)
Kirthik R (Editor)
Jyotirmayee Swain (Proof Reader)
Contents

1 Introduction to Assessment & Evaluation 1

2 Developing Assessment Tools, Techniques & Strategies – I 20

3 Developing Assessment Tools, Techniques & Strategies – II 41

4 Analysis, Interpretation, Reporting and Communicating of Students’ performance 60

5 Previous Years’ Question Papers 74


1 Introduction to Assessment & Evaluation
Contents
1.1 Concept of Test, Measurement, Assessment, Examination, Appraisal and Evaluation
in Education and their Inter-relationships . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Meaning of Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Meaning of Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Meaning of Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.4 Meaning of Examination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.5 Meaning of Appraisal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.6 Meaning of Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.7 Interrelation among Assessment, Evaluation and Measurement . . . . . . . . . 3
1.2 Purpose and Objectives of Assessment/ Evaluation – for Placement, Providing Feed-
back, Grading, Promotion, Certification, Diagnosis of Learning Difficulties . . . . . . . 3
1.2.1 Purpose of Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Importance of Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Importance of assessment & evaluation for quality education – as a tool in Pedagogic
decision making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Forms of Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Prognostic Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.2 Formative Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4.3 Diagnostic Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.4 Summative Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.5 Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.6 Norm Referenced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.7 Criterion referenced based on purpose . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Teacher made tests, Standardized tests: based on nature & scope . . . . . . . . . . . 10
1.6 Oral and Written performance: based on mode of response . . . . . . . . . . . . . . . 10
1.7 Based on Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7.1 Internal Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7.2 External Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7.3 Self-Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7.4 Peer Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7.5 Group Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.8 Based on nature of information gathered . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.8.1 Quantitative Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.8.2 Qualitative Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.9 CCE - School Based Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.9.1 School Based Assessment (SBA) . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.9.2 Standard Based Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.10 Recent trends in Assessment and Evaluations . . . . . . . . . . . . . . . . . . . . . . 14
1.10.1 Assessment for learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.10.2 Assessment as learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.10.3 Assessment of learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.10.4 Relationship with Formative and Summative . . . . . . . . . . . . . . . . . . . 15
1.10.5 Authentic Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.11 Achievement surveys – [State & National] . . . . . . . . . . . . . . . . . . . . . . . . 16
1.11.1 Online Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.11.2 On demand assessment/ evaluation . . . . . . . . . . . . . . . . . . . . . . . . 18
1.11.3 Focus on Assessment and Evaluation in various Educational Commissions and
NCFs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.1 Concept of Test, Measurement, Assessment, Examination, Appraisal and Evaluation in Education and their Inter-relationships
1.1.1 Meaning of Assessment
In education, the term assessment refers to the wide variety of methods that educators use to eval-
uate, measure, and document the academic readiness, learning progress, and skill acquisition of
students from preschool through college and adulthood. It is the process of systematically gathering
information as part of an evaluation. Assessment is carried out to see what children and young
people know, understand and are able to do. Assessment is very important for tracking progress,
planning next steps, reporting and involving parents, children and young people in learning.

1.1.2 Meaning of Measurement


Measurement is the process of estimating the values of physical quantities such as time, temperature,
weight, and length. Each measured value is expressed in terms of some standard unit, and the
estimated values are compared against standard quantities of the same type. More generally,
measurement is the assignment of a number to a characteristic of an object or event, so that it can be
compared with other objects or events. The scope and application of a measurement depend on the
context and discipline.

1.1.3 Meaning of Tests


A test is a procedure intended to establish the quality, performance or reliability of something,
especially before it is taken into widespread use.

1.1.4 Meaning of Examination


An examination is the act of examining or the state of being examined. In education, it consists of
written exercises, oral questions or practical tasks set to test a candidate's knowledge and skill, as in
an examination paper.

1.1.5 Meaning of Appraisal


An appraisal is an assessment or estimation of the worth, value or quality of a person or thing: an
impartial analysis and evaluation conducted according to established criteria to determine the
acceptability, merit or worth of an item.

1.1.6 Meaning of Evaluation
Evaluation is a broader term that refers to all of the methods used to find out what happens as
a result of using a specific intervention or practice. Evaluation is the systematic assessment of the
worth or merit of some object. It is the systematic acquisition and assessment of information to
provide useful feedback about some object.

1.1.7 Interrelation among Assessment, Evaluation and Measurement


Though the terms assessment and evaluation are often used interchangeably, many writers differen-
tiate between them. Assessment is defined as gathering information or evidence, and evaluation is
the use of that information or evidence to make judgements. Measurement involves assigning numbers
or scores to an ”attribute or characteristic of a person in such a way that the numbers describe
the degree to which the person possesses the attribute”. Assigning grade equivalents to scores on a
standardized achievement test is an example of measurement.

1.2 Purpose and Objectives of Assessment/ Evaluation – for Placement, Providing Feedback, Grading, Promotion, Certification, Diagnosis of Learning Difficulties
1.2.1 Purpose of Assessment
Assessment drives instruction

A pre-test or needs assessment informs instructors what students know and do not know at the
outset, setting the direction of a course. If done well, the information garnered will highlight the gap
between existing knowledge and a desired outcome. Accomplished instructors find out what students
already know, and use that prior knowledge as a stepping-off point to develop new understanding.
The same is true for data obtained through assessment done during instruction. By checking in with
students throughout instruction, outstanding instructors constantly revise and refine their teaching
to meet the diverse needs of students.

Assessment drives learning

What and how students learn depends to a major extent on how they think they will be assessed.
Assessment practices must send the right signals to students about what to study, how to study, and
the relative time to spend on concepts and skills in a course. Accomplished faculty communicate
clearly what students need to know and be able to do, both through a clearly articulated syllabus,
and by choosing assessments carefully in order to direct student energies. High expectations for
learning result in students who rise to the occasion.

Assessment informs students of their progress

Effective assessment provides students with a sense of what they know and don’t know about a
subject. If done well, the feedback provided to students will indicate to them how to improve their
performance. Assessments must clearly match the content, the nature of thinking, and the skills
taught in a class. Through feedback from instructors, students become aware of their strengths and
challenges with respect to course learning outcomes. Assessment done well should not be a surprise
to students.

Assessment informs teaching practice
Reflection on student accomplishments offers instructors insights on the effectiveness of their teaching
strategies. By systematically gathering, analysing, and interpreting evidence we can determine how
well student learning matches our outcomes / expectations for a lesson, unit or course. The knowledge
from feedback indicates to the instructor how to improve instruction, where to strengthen teaching,
and what areas are well understood and therefore may be cut back in future courses.

Role of grading in assessment


Grades should be a reflection of what a student has learned as defined in the student learning
outcomes. They should be based on direct evidence of student learning as measured on tests, papers,
projects, and presentations, etc. Grades often fail to tell us clearly about “large learning” such as
critical thinking skills, problem solving abilities, communication skills (oral, written and listening),
social skills, and emotional management skills.

When student learning outcomes are not met


Accomplished faculty focus on the data coming out of the assessments they complete before, during
and at the end of a course, and determine the degree to which student learning outcomes are or are
not met. If students are off course early on, a redirecting, re-teaching of a topic, referral to student
learning centres, or review sessions by the instructor may remediate the problem.

Through careful analysis it is possible to determine the challenges and weaknesses of instruction
in order to support student learning better. Some topics or concepts are notoriously difficult, and
there may be a better approach to use. Perhaps a model, simulation, experiment, example or illus-
tration will clarify the concept for students. Perhaps spending a bit more time, or going over a topic
in another way will make a difference. If the problem is noticed late in the course, an instructor may
plan to make any instructional changes for the next time the course is taught, but it is helpful to
make a note of the changes needed at the time so that the realization is not lost.

1.2.2 Importance of Evaluation


1. Diagnostic : Evaluation is a continuous and comprehensive process. It helps the teacher to
find out the problems of his students and to remove them.
2. Remedial : By remedial work we mean the proper solution applied after identifying the problems.
A teacher can give a proper solution to bring a desirable change in learners' behaviour and to
develop their personality.
3. To clarify the objectives of education : Another purpose of evaluation is to clarify the
objectives of education. The objective of education is to bring about a change in the learner's
behaviour, and through evaluation a teacher can verify that this change has taken place.
4. It provides Guidance : A teacher can guide his learners only if he has proper knowledge
about them, and such guidance is possible only after proper evaluation, which covers all
dimensions: abilities, aptitude, interest, intelligence, etc.
5. Helpful in classification : Evaluation is a means by which a teacher comes to know the various
levels of his students in intelligence, ability and interest; on this basis he can classify his students
and provide them guidance.
6. Helpful in Improvement of Teaching and Learning process : Through evaluation a teacher
can not only improve the personality of the learner but also come to know the level of his own
teaching and improve it. Thus it is helpful in the improvement of the teaching and learning
process.

1.3 Importance of assessment & evaluation for quality education – as a tool in Pedagogic decision making
Performance in schools is increasingly judged on the basis of effective learning outcomes. Information
is critical to knowing whether the school system is delivering good performance and to providing
feedback for improvement in student outcomes. Countries use a range of techniques for the evaluation
and assessment of students, teachers, schools and education systems. Many countries test samples
and/or all students at key points, and sometimes follow students over time.

1. Student Assessment : Several common policy challenges arise concerning student assess-
ment: aligning educational standards and student assessment; balancing external assessments
and teacher-based assessments in the assessment of learning and integrating student formative
assessment in the evaluation and assessment framework.

2. Teacher Evaluation : Common policy challenges in teacher evaluation are: combining the
improvement and accountability functions of teacher evaluation; accounting for student results
in the evaluation of teachers; and using teacher evaluation results to shape incentives for teachers.

3. School Evaluation : School evaluation presents common policy challenges concerning aligning
external evaluation of schools with internal school evaluation; providing balanced public
reporting on schools; and improving the data-handling skills of school agents.

4. System Evaluation : Common policy challenges for evaluation of education systems are :
meeting information needs at system level; monitoring key outcomes of the education system;
and maximising the use of system-level information.

1.4 Forms of Assessment


1.4.1 Prognostic Assessment
A prognostic assessment extends the findings of an assessment of abilities and potentials with a
further dimension: the future development of the person concerned, as well as the necessary
conditions, timeframe and limits of that development. Finding the right person for an executive
position requires a reliable understanding of the personality as well as of the possibilities and limits
of personal development. Even an experienced and keen observer of human nature may be deceived,
and even recognized and proven test procedures may be incomplete or lead to wrong results – and
misjudgements can become expensive in both material and immaterial ways.

Six Goals of the Prognostic Personality and Abilities Assessment


1. Analysis of existing abilities and interests, including those not (yet) known, and of the development to be expected.

2. If needed, a comparison with the job description and profile of requirements.

3. Basic conditions and needs for the development : how it can be enhanced and ensured.

4. Period : how long the development will take until the defined goals can be reached.

5. Limits of developmental possibilities, either referring to the defined goals (selection assessment),
or generally, with a realistic time frame of 3 to 5 years.

6. Quality assurance and sustainability : how the results can be monitored and ensured in
the long term.

The prognostic assessment is suitable for all management levels including executive board and admin-
istrative council, but likewise for young people with the aim of a comprehensive potential analysis.
Typically, the prognostic assessment is accomplished as an individual one-day assessment. The ob-
jectives are defined individually.

1.4.2 Formative Assessment


Formative evaluation is used to monitor the learning progress of students during the period of instruc-
tion. Its main objective is to provide continuous feedback to both teacher and student concerning
learning successes and failures while instruction is in process. Feedback to students provides re-
inforcement of successful learning and identifies the specific learning errors that need correction.
Feedback to teacher provides information for modifying instruction and for prescribing group and
individual remedial work.

Formative evaluation helps a teacher to ascertain the pupil progress from time to time. At the
end of a topic or unit or segment or a chapter, the teacher can evaluate the learning outcomes, based
on which he can modify his methods, techniques and devices of teaching to provide better learning
experiences. Formative evaluation also provides feedback to pupils. The pupil knows his learning
progress from time to time. Thus, formative evaluation motivates the pupils for better learning. As
such, it helps the teacher to take appropriate remedial measures.

“The idea of generating information to be used for revising or improving educational practices is the core concept of formative evaluation.”

It is concerned with the process of development of learning. In this sense, evaluation is concerned
not only with the appraisal of achievement but also with its improvement. Formative evaluation
is generally concerned with the internal agent of evaluation, such as the participation of the learner
in the learning process.

The functions of formative evaluation are


1. Diagnosing : Diagnosing is concerned with determining the most appropriate method or
instructional materials conducive to learning.

2. Placement : Placement is concerned with finding out the position of an individual in the
curriculum from which he has to start learning.

3. Monitoring : Monitoring is concerned with keeping track of the day-to-day progress of the
learners and to point out changes necessary in the methods of teaching, instructional strategies,
etc.

Characteristics of Formative Evaluation


1. It is an integral part of the learning process.

2. It occurs, frequently, during the course of instruction.

3. Its results are made immediately known to the learners.

4. It may sometimes take the form of teacher observation only.

5. It reinforces learning of the students.

6. It pinpoints difficulties being faced by a weak learner.

7. Its results cannot be used for grading or placement purposes.

8. It helps in modification of instructional strategies including method of teaching, immediately.

9. It motivates learners, as it provides them with knowledge of progress made by them.

10. It views evaluation as a process.

11. It is generally a teacher-made test.

12. It does not take much time to be constructed.

Examples
1. Monthly tests.

2. Class tests.

3. Periodical assessment.

4. Teacher’s observation, etc.

1.4.3 Diagnostic Assessment


It is concerned with identifying the learning difficulties or weakness of pupils during instruction. It
tries to locate or discover the specific area of weakness of a pupil in a given course of instruction and
also tries to provide remedial measure. When the teacher finds that in spite of the use of various
alternative methods, techniques and corrective prescriptions the child still faces learning difficulties,
he takes recourse to a detailed diagnosis through specifically designed tests called ‘diagnostic tests’.
Diagnosis can be made by employing observational techniques, too. In case of necessity the services
of psychological and medical specialists can be utilised for diagnosing serious learning handicaps.

1.4.4 Summative Assessment


Summative evaluation is done at the end of a course of instruction to know to what extent the
objectives previously fixed have been accomplished. In other words, it is the evaluation of pupils’
achievement at the end of a course. The main objective of the summative evaluation is to assign
grades to the pupils. It indicates the degree to which the students have mastered the course content.
It helps to judge the appropriateness of instructional objectives. Summative evaluation is generally
the work of standardised tests. It tries to compare one course with another.

The approaches of summative evaluation imply some sort of final comparison of one item or
criterion against another. It carries the danger of negative effects: it may brand a student as a
failed candidate, causing frustration and a setback in the candidate's learning process, which is
an example of such a negative effect. The traditional examinations are generally
summative evaluation tools. Tests for formative evaluation are given at regular and frequent intervals
during a course; whereas tests for summative evaluation are given at the end of a course or at the
end of a fairly long period (say, a semester).

The functions of this type of evaluation are


1. Crediting : Crediting is concerned with collecting evidence that a learner has achieved some
instructional goals in contents in respect to a defined curricular programme.

2. Certifying : Certifying is concerned with giving evidence that the learner is able to perform
a job according to the previously determined standards.

3. Promoting : It is concerned with promoting pupils to next higher class.
4. Selecting : Selecting the pupils for different courses after completion of a particular course
structure.

Characteristics of Summative Evaluation


1. It is terminal in nature as it comes at the end of a course of instruction (or a programme).
2. It is judgemental in character in the sense that it judges the achievement of pupils.
3. It views evaluation “as a product”, because its chief concern is to point out the levels of
attainment.
4. It cannot be based on teachers’ observations only.
5. It does not pin-point difficulties faced by the learner.
6. Its results can be used for placement or grading purposes.
7. It reinforces learning of the students who have learnt an area.
8. It may or may not motivate a learner. Sometimes, it may have negative effect.

Examples
1. Traditional school and university examination
2. Teacher-made tests
3. Standardised tests
4. Practical and oral tests
5. Rating scales

1.4.5 Placement
Placement evaluation is designed to place the right person in the right place. It ascertains the entry
performance of the pupil. The future success of the instructional process depends on the success
of placement evaluation. Placement evaluation aims at evaluating the pupil’s entry behaviour in a
sequence of instruction. In other words, the main goal of such evaluation is to determine the level
or position of the child in the instructional sequence. We have a planned scheme of instruction for
classroom which is supposed to bring a change in pupil’s behaviour in an orderly manner. Then we
prepare or place the students for planned instruction for their better prospects.

When a pupil is to undertake a new instruction, it is essential to know the answers to the following
questions :
1. Does the pupil possess the required knowledge and skills for the instruction?
2. Has the pupil already mastered some of the instructional objectives?
3. Is the mode of instruction suitable to the pupil's interests, work habits and personal
characteristics?
Sometimes past experiences which inspire present learning also lead to placement in a better position
or admission. This type of evaluation is helpful for the admission of pupils into a new course of
instruction.

Examples
1. Aptitude test
2. Self-reporting inventories
3. Observational techniques
4. Medical entrance exam.
5. Engineering or Agriculture entrance exam.

1.4.6 Norm Referenced


Norm-referenced evaluation is the traditional class-based assignment of numerals to the attribute
being measured. It means that the measurement act relates to some norm, group or a typical perfor-
mance. It is an attempt to interpret the test results in terms of the performance of a certain group.
This group is a norm group because it serves as a referent of norm for making judgements. Test
scores are neither interpreted in terms of an individual (self-referenced) nor in terms of a standard
of performance or a pre-determined acceptable level of achievement called the criterion behaviour
(criterion-referenced).

The measurement is made in terms of a class or any other norm group. Almost all our class-
room tests, public examinations and standardised tests are norm-referenced as they are interpreted
in terms of a particular class and judgements are formed with reference to the class.

Examples
1. Raman stood first in Mathematics test in his class.
2. The typist who types 60 words per minute stands above 90 percent of the typists who appeared
for the interview.
3. Amit surpasses 65% of the students of his class in a reading test.

A simple working definition


A norm-referenced test is used to ascertain an individual’s status with respect to the performance of
other individuals on that test.

In the above examples, the person’s performance is compared to others of their group and the
relative standing position of the person in his/her group is mentioned. We compare an individual’s
performance with similar information about the performance of others.

That is why selection decisions always depend on norm-referenced judgements. A major requirement
of norm-referenced judgements is that the individuals being measured and the individuals forming the
group or norm are alike. In norm-referenced tests, very easy and very difficult items are discarded
and items of medium difficulty are preferred, because the aim is to study relative achievement.

1.4.7 Criterion referenced based on purpose


When evaluation is concerned with the performance of the individual in terms of what he can do
or the behaviour he can demonstrate, it is termed criterion-referenced evaluation. In this evaluation
there is a reference to a criterion, but no reference to the performance of other individuals in the
group. In it we refer an individual's performance to a predetermined criterion which is well
defined.

Examples
1. Raman got 93 marks in a test of Mathematics.

2. A typist types 60 words per minute.

3. Amit’s score in a reading test is 70.

A simple working definition


A criterion-referenced test is used to ascertain an individual’s status with respect to a defined achieve-
ment domain.

In the above examples there is no reference to the performance of other members of the group.
Thus criterion-referenced evaluation determines an individual’s status with reference to well defined
criterion behaviour.

It is an attempt to interpret test results in terms of clearly defined learning outcomes which serve
as referents (criteria). Success of criterion-reference test lies in the delineation of all defined levels of
achievement which are usually specified in terms of behaviourally stated instructional objectives.

The purpose of criterion-referenced evaluation is to assess the objectives; it is an objective-based
test. The objectives are assessed in terms of behavioural changes among the students. Such a test
assesses the ability of the learner in relation to the criterion behaviour. Glaser (1963) first used the
term 'criterion-referenced test' to describe the learner's achievement on a performance continuum.

Hively and Millman (1974) suggested a new term, 'domain-referenced test'; for them the word
'domain' has a wider connotation. A criterion-referenced test can measure one or more assessment
domains.
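The contrast between the two interpretations can be shown with a small illustrative sketch: the same raw score is judged once against a group (norm-referenced percentile rank) and once against a predetermined standard (criterion-referenced cut-score). The scores and the mastery cut-off below are invented for illustration, not taken from any real test.

```python
# Illustrative sketch: the same raw score interpreted two ways.
# All numbers here are invented for illustration only.

def percentile_rank(score, group_scores):
    """Norm-referenced view: percentage of the group scoring below this score."""
    below = sum(1 for s in group_scores if s < score)
    return 100 * below / len(group_scores)

def meets_criterion(score, cut_score):
    """Criterion-referenced view: comparison with a predetermined standard only."""
    return score >= cut_score

class_scores = [45, 52, 60, 60, 68, 70, 74, 81, 88, 93]

# Norm-referenced judgement: where does a score of 70 stand in this class?
print(percentile_rank(70, class_scores))   # 50.0 -> surpasses half the class

# Criterion-referenced judgement: does the same score reach a mastery cut of 75?
print(meets_criterion(70, 75))             # False -> criterion not met
```

The sketch mirrors the text: the norm-referenced judgement depends entirely on the group, while the criterion-referenced judgement would be unchanged even if every other member of the class scored differently.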

1.5 Teacher made tests, Standardized tests: based on nature & scope
The standardized test is based on the general content and objectives common to many schools all
over the country whereas the teacher made test can be adapted to content and objectives specific to
his own situation. The standardized test deals with large segments of knowledge or skill whereas the
teacher made test can be prepared in relation to any specific limited topic.

The standardized test is developed with the help of professional writers, reviewers and editors of
test items, whereas the teacher made test usually relies upon the skill of one or two teachers. The
standardized test provides norms for various groups that are broadly representative of performance
throughout the country, whereas the teacher made test lacks this external point of reference.

1.6 Oral and Written performance: based on mode of response
Student oral responses are longer and more complex, parallel to extended written response ques-
tions. Just as with extended written response, one evaluates the quality of oral responses using a
rubric or scoring guide. Longer, more complicated responses would occur, for example, during oral
examination or oral presentations. Written assessments are activities in which the student selects or
composes a response to a prompt.

In most cases, the prompt consists of printed materials (a brief question, a collection of histori-
cal documents, graphic or tabular material, or a combination of these). However, it may also be an
object, an event, or an experience. Student responses are usually produced 'on demand', i.e., the
respondent does the writing at a specified time and within a fixed amount of time. These constraints
contribute to standardization of testing conditions, which increases the comparability of results across
students or groups.

1.7 Based on Context


1.7.1 Internal Assessment
Internal assessment can be due at different times throughout the semester and is managed by the
individual lecturer. The internal assessment is what you do as part of your coursework - the essays,
group assignments, tests, etc.

1.7.2 External Assessment


Formal examinations (external assessment) are managed centrally by the Assessment Office within
Student Services and are held at the end of each semester. External assessment refers to the exami-
nation, which is usually taken in the exam period once your lectures and workshops are finished.

1.7.3 Self-Assessment
Once learners are able to use the assessment criteria appropriately and can actively contribute to
peer-assessment activities, the next step is to engage them in self-assessment tasks. Self-assessment
is a very powerful teaching tool and crucial to the Assessment for Learning process.

Once learners can engage in peer assessment activities, they will be able to apply these new
skills to undertaking ‘objective’ assessment of their own work. We all know it is easy to find fault in
other people’s work, but it is a far more challenging process to judge one’s own work. Once learners
can assess their own work and their current knowledge base, they will be able to identify the gap in
their own learning; this will aid learning and promote progress and contribute to the self-management
of learning.

Teachers need to:

1. provide opportunities for learners to reflect on their own work

2. ensure they provide individuals with the necessary support so that they are able to acknowledge
shortcomings in their own work

3. support learners through the self-assessment process so that strengths in their work are fully
recognized and weaknesses are not exaggerated to the point that they damage learners’ self-
esteem.

1.7.4 Peer Assessment


It is widely recognized that when learners are fully engaged in the learning process, learning
increases. A fundamental requirement of Assessment for Learning is for learners to know what they
have to learn, why it is required and how it is to be assessed.

When learners are able to understand the assessment criteria, progress is often maximized, especially
when individuals have opportunities to apply the assessment criteria to work produced by
their peers as part of planned classroom activities. Peer assessment using the predefined assessment
criteria is the next stage to evaluate learner understanding and consolidating learning.

Benefits of organizing peer assessment activities include:

1. learners clarifying their own ideas and understanding of the learning intention;

2. checking individuals’ understanding of the assessment criteria and how they are to be applied
to learners’ work.

1.7.5 Group Assessment


Group work is a method of instruction that gets students to work together. There are various benefits
and challenges that come with preparing, developing and facilitating group work with teaching and
learning practices. As an assessment task, groups often develop or create a product or piece of work
to demonstrate learning and understanding of a particular concept.

The assessment may focus on the final product or understanding, or on the process of developing that
product or understanding. While the benefits of group work are well documented, allocating marks
and feedback to individuals within the group can be a challenge.

1.8 Based on nature of information gathered


1.8.1 Quantitative Research
Quantitative research is perhaps the simpler of the two to define and identify. The data produced are always
numerical, and they are analysed using mathematical and statistical methods. If there are no num-
bers involved, then it’s not quantitative research. Some phenomena obviously lend themselves to
quantitative analysis because they are already available as numbers. Examples include changes in
achievement at various stages of education, or the increase in number of senior managers holding
management degrees. However, even phenomena that are not obviously numerical in nature can be
examined using quantitative methods.

The most common sources of quantitative data include :


1. Surveys, whether conducted online, by phone or in person. These rely on the same questions
being asked in the same way to a large number of people;

2. Observations, which may either involve counting the number of times that a particular phe-
nomenon occurs, such as how often a particular word is used in interviews, or coding observa-
tional data to translate it into numbers

3. Secondary data, such as company accounts.

1.8.2 Qualitative Research


Qualitative research is any research that does not involve numbers or numerical data. It often involves
words or language, but may also use pictures, photographs and observations. Qualitative analysis
results in rich data that give an in-depth picture, and it is particularly useful for exploring how
and why things have happened.

Although qualitative data is much more general than quantitative, there are still a number of common
techniques for gathering it. These include :

1. Interviews, which may be structured, semi-structured or unstructured;

2. Focus groups, which involve multiple participants discussing an issue;

3. ‘Postcards’, or small-scale written questionnaires that ask, for example, three or four focused
questions of participants but allow them space to write in their own words;

4. Secondary data, including diaries, written accounts of past events, and company reports;

5. Observations, which may be on site, or under ‘laboratory conditions’, for example, where par-
ticipants are asked to role-play a situation to show what they might do.

1.9 CCE - School Based Assessment


Continuous and Comprehensive Evaluation (CCE) refers to a system of school-based evaluation of
students that covers all aspects of students’ development. It is a developmental process of assess-
ment which emphasizes two-fold objectives: continuity in evaluation on the one hand, and assessment
of broad-based learning and behavioural outcomes on the other.

The term ‘continuous’ is meant to emphasise that evaluation of identified aspects of students’ ‘growth
and development’ is a continuous process rather than an event, built into the total teaching-learning
process and spread over the entire span of the academic session. It means regularity of assessment,
frequency of unit testing, diagnosis of learning gaps, use of corrective measures, retesting, and
feedback to teachers and students for their self-evaluation.

The second term ‘comprehensive’ means that the scheme attempts to cover both the scholastic
and the co-scholastic aspects of students’ growth and development. Scholastic aspects include
curricular areas or subject-specific areas, whereas co-scholastic aspects include life skills,
co-curricular activities, attitudes, and values.

The scheme is thus a curricular initiative, attempting to shift emphasis from testing to holistic
learning. It aims at creating good citizens possessing sound health, appropriate skills and desir-
able qualities besides academic excellence. It is hoped that this will equip the learners to meet the
challenges of life with confidence and success.

1.9.1 School Based Assessment (SBA)


The term school based assessment may be defined as :

• Assessment that facilitates attainment of competencies specified in terms of learning outcomes


in a holistic manner during the teaching-learning process.

• Assessment embedded in the teaching and learning process within the broader educational
philosophy of ‘assessment for learning’.

• Assessment of school students by school teachers in the schools.

Salient features of SBA


1. Integrate teaching-learning and assessment

2. No documentation load on teachers (recording, reporting)

3. Child-centered and activity based pedagogy
4. Focus on (learning-outcome based) competency development rather than content memorisation
5. Broadening the scope of assessment by way of including self-assessment, peer-assessment besides
teacher assessment
6. Non-threatening, stress free and enhanced participation/ interaction
7. Focus on assessment of/and/as learning rather than evaluation of achievement
8. Reposing faith in the teacher and the system
9. Enhancing self confidence in children

1.9.2 Standard Based Assessment


Standards-based assessment depends on a set of pre-defined statements outlining different levels or
standards of achievement in a program, course or assessment component, normally expressed in terms
of the stated assessment criteria.

This system of assessment involves awarding grades to students to reflect the level of performance (or
standard) they have achieved relative to the pre-defined standards. Students’ grades, therefore, are
not determined in relation to the performance of others, or to a pre-determined distribution of grades.

Standards-based assessment lets students know against which criteria you will judge their work,
and the standards attached to each of these criteria. It tells students what performance is required
and allows you to make comparisons between students based on their achievement of the standards.
Standards should be clear, straightforward, observable, measurable, and well-articulated. The
standards guide us in creating experiences that enable our students to know how, when and why to
say what to whom.

Types of standards
1. Content standards: statements about what learners should know and be able to do.
2. Performance standards: show how learners achieve the targeted standards. They refer to how
learners are meeting a standard and show the learner’s progress towards meeting it.
3. Proficiency standards: tell us how well learners should perform.

1.10 Recent trends in Assessment and Evaluations


Assessment plays a major role in how students learn, their motivation to learn, and how teachers
teach. Assessment is used for various purposes.

1.10.1 Assessment for learning


Where assessment helps teachers gain insight into what students understand in order to plan and
guide instruction, and provide helpful feedback to students.

1.10.2 Assessment as learning


Where students develop an awareness of how they learn and use that awareness to adjust and advance
their learning, taking an increased responsibility for their learning.

1.10.3 Assessment of learning
Where assessment informs students, teachers and parents, as well as the broader educational com-
munity, of achievement at a certain point in time in order to celebrate success, plan interventions
and support continued progress.

Assessment must be planned with its purpose in mind. Assessment for, as and of learning all have a
role to play in supporting and improving student learning, and must be appropriately balanced. The
most important part of assessment is the interpretation and use of the information that is gleaned
for its intended purpose. Assessment is embedded in the learning process.

It is tightly interconnected with curriculum and instruction. As teachers and students work towards
the achievement of curriculum outcomes, assessment plays a constant role in informing instruction,
guiding the student’s next steps, and checking progress and achievement. Teachers use many dif-
ferent processes and strategies for classroom assessment, and adapt them to suit the assessment
purpose and needs of individual students. Research and experience show that student learning is
best supported when

1. Instruction and assessment are based on clear learning goals

2. Instruction and assessment are differentiated according to student learning needs

3. Students are involved in the learning process (they understand the learning goal and the cri-
teria for quality work, receive and use descriptive feedback, and take steps to adjust their
performance)

4. Assessment information is used to make decisions that support further learning

5. Parents are well informed about their child’s learning, and work with the school to help plan
and provide support

6. Students, families, and the general public have confidence in the system

1.10.4 Relationship with Formative and Summative


The instructional programme is still in progress during formative assessment, whereas in summative
assessment it has, in most cases, been completed.

1. Formative assessment is developmental rather than judgmental in nature, whereas summative
assessment judges the merit of instructional sequences.

2. Formative assessment is made during the instructional phase about progress in learning, whereas
summative assessment is the terminal assessment of performance at the end of instruction.

3. In formative assessment, an individual’s result is reported in a pass-fail pattern, whereas in
summative assessment the report is given in terms of total scores.

4. The content focus of formative assessment is detailed and narrow, whereas in summative
assessment it is general and broad.

5. Formative assessment is carried out through daily assignments and observation, whereas
summative assessment is carried out through projects and tests.

1.10.5 Authentic Assessment
Authentic assessment (AA) springs from the following reasoning and practice :

1. A school’s mission is to develop productive citizens.

2. To be a productive citizen, an individual must be capable of performing meaningful tasks in


the real world.

3. Therefore, schools must help students become proficient at performing the tasks they will
encounter when they graduate.

4. To determine if it is successful, the school must then ask students to perform meaningful tasks
that replicate real world challenges to see if students are capable of doing so.

Thus, in AA, assessment drives the curriculum. That is, teachers first determine the tasks that stu-
dents will perform to demonstrate their mastery, and then a curriculum is developed that will enable
students to perform those tasks well, which would include the acquisition of essential knowledge and
skills. This has been referred to as planning backwards.

If I were a golf instructor and I taught the skills required to perform well, I would not assess my
students’ performance by giving them a multiple choice test. I would put them out on the golf
course and ask them to perform. Although this is obvious with athletic skills, it is also true for
academic subjects. We can teach students how to do math, do history and do science, not just know
them. Then, to assess what our students had learned, we can ask students to perform tasks that
“replicate the challenges” faced by those using mathematics, doing history or conducting scientific
investigation.

Traditional                Authentic
Selecting a Response       Performing a Task
Contrived                  Real-life
Recall/Recognition         Construction/Application
Teacher-structured         Student-structured
Indirect Evidence          Direct Evidence

1.11 Achievement surveys – [State & National]


India has made a significant investment in its education. The government’s flagship programme
Sarva Shiksha Abhiyan (SSA) is designed to ensure access, equity and quality in elementary educa-
tion. The nation now needs reliable information about students’ achievement in order to judge the
quality of education provided. Carried out as part of SSA, the NAS aims to collect reliable
information about the achievement levels of students in government and government-aided
elementary schools.

In 2000, NCERT’s NAS programme was incorporated into the SSA programme. The plan was
to carry out three NAS cycles, each cycle covering three key grades :

1. Class III

2. Class V

3. Class VII/VIII

All three Classes are tested in mathematics and language.

In Class V, students are also tested in environmental studies (EVS), while Class VII/VIII com-
pletes tests in science and social science. The Baseline Achievement Survey (BAS) was carried out
in 2001-2004, followed by the Midterm Achievement Survey (MAS) in 2005-2008. The experience
gained through these initial two cycles made the value of the NAS clear, and the surveys were made
an ongoing feature of the national education system. To mark this shift from stand-alone surveys to
continuous assessment, the Terminal Achievement Survey (TAS) has been renamed ‘Cycle 3’.

The NAS is a useful tool for measuring progress in education, for teachers and policymakers alike –
to establish what students are achieving in core subjects and to identify any areas of concern. By
repeating the NAS at regular intervals, the data can be used to measure trends in education achieve-
ment levels and measure progress made by SSA and other education reforms.

1.11.1 Online Assessment


Online assessment is the process used to measure certain aspects of information for a set purpose
where the assessment is delivered via a computer connected to a network. Most often the assessment
is some type of educational test. Different types of online assessments contain elements of one or
more of the following components, depending on the assessment’s purpose: formative, diagnostic, or
summative.

Instant and detailed feedback, as well as flexibility of location and time, are just two of the many
benefits associated with online assessments. There are many resources available that provide online
assessments, some free of charge and others that charge fees or require a membership. The online
examination system not only reflects the fairness and objectivity of examination, but also reduces
the workload of teachers, and is accepted by more and more schools, certification organizations and
training organizations. Most online examination systems only support several fixed question types
and don’t allow users to define their own question types, so they have poor scalability.

Newer online examination systems not only provide several basic question types but also allow users
to define new question types (user-defined question types) by composing basic question types and/or
other user-defined question types, realized using object-oriented concepts and the composite design
pattern. Such systems overcome the shortcomings of older online examination systems and have better
extensibility and flexibility.
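The composite arrangement described above can be sketched as follows. This is an illustrative
sketch only; the class names (Question, BasicQuestion, CompositeQuestion) and the marks-totalling
method are assumptions for illustration, not drawn from any particular examination system.

```python
# Illustrative sketch of the composite design pattern applied to question types.
# Class and method names are hypothetical.

class Question:
    """Common interface shared by basic and user-defined (composite) question types."""
    def marks(self) -> int:
        raise NotImplementedError

class BasicQuestion(Question):
    """A fixed question type, e.g. multiple choice or fill-in-the-blank."""
    def __init__(self, text: str, marks: int):
        self.text = text
        self._marks = marks

    def marks(self) -> int:
        return self._marks

class CompositeQuestion(Question):
    """A user-defined question type composed of basic and/or composite questions."""
    def __init__(self, title: str):
        self.title = title
        self.parts: list[Question] = []

    def add(self, part: Question) -> None:
        self.parts.append(part)

    def marks(self) -> int:
        # Total marks aggregate over all sub-questions, however deeply nested.
        return sum(part.marks() for part in self.parts)

# A comprehension question built from two basic parts
passage = CompositeQuestion("Reading comprehension")
passage.add(BasicQuestion("State the main idea of the passage.", 2))
passage.add(BasicQuestion("Choose the synonym of 'candid'.", 1))
print(passage.marks())  # 3
```

Because composite questions implement the same interface as basic ones, user-defined types can
themselves be nested inside other user-defined types, which is what gives the design its
extensibility and flexibility.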

Types of Online Examination


Online examination is used primarily to measure cognitive abilities, demonstrating what has been
learned after a particular educational event has occurred, such as the end of an instructional unit or
chapter. When examining practical abilities, or demonstrating learning that has occurred over a longer
period of time, an online portfolio is often used. The first element that must be prepared when
teaching an online course is examination. Examination is used to determine if learning is happening,
to what extent and if changes need to be made.

1. Independent Work : Independent work is work that a student prepares to assist the instruc-
tor in determining their learning progress. Some examples are: exercises, papers, portfolios,
and exams (multiple choice, true false, short answer, fill in the blank, open ended/essay or
matching). To truly evaluate, an instructor must use multiple methods.

2. Group Work : Students are often asked to work in groups. With this brings on new exami-
nation strategies. Students can be evaluated using a collaborative learning model in which the
learning is driven by the students and/or a cooperative learning model where tasks are assigned
and the instructor is involved in decisions.

1.11.2 On demand assessment/ evaluation
The scheme of on-demand examination is a comprehensive ICT enabled system of examination
which provides the learners an opportunity to appear in the examination as per their preparation
and convenience. In fact, it is a blended scheme of ICT and traditional examination system wherein
students can walk in at any time at the selected examination centres and take the examination. The
demand for flexibility in the education system and the changing profile of learners has necessitated
starting such an innovative scheme, which has made the existing examination system more flexible
and learner-friendly.

This is particularly successful in the distance education system, as most distance learners in higher
education are working people; they normally do not get leave from their organizations for several
days at a stretch for the term-end examination, and hence fail to complete their courses within the
stipulated time limit. Most on-demand examinations are conducted through ICT. Their objective is to
enable learners to appear in the examination as per their preparation and convenience, on the date
and time of their choice.

The features of the on-demand examination system are as follows :

1. Learner-friendly innovative scheme of examination.

2. More flexible and independent of traditional fixed time frame in examination.

3. No need to wait for the six monthly term end examination.

4. Different sets of question papers generated on the day of examination.

5. Software specially developed for the purpose is used.

6. Least possibility of malpractices or unfair means.

7. May reduce load on the term-end exam in future.

8. It reduces workload of students, teachers and also the entire system of examination.

On-demand examination makes use of ICT to solve problems which arise due to human limitations.

The major advantages of on-demand examination are as follows :

1. It makes possible instant generation of parallel question papers, and facilitates authorised data
entry at different points, leaving no chance for human error.

2. It has quietly reformed the system of evaluation without making abrupt changes.

3. It is not only simple and user friendly but it is also cost effective and saves time and effort in
setting question papers.

4. It generates individualised and unique question papers on the day of examination by picking
up the questions randomly from the question bank as per the blueprint & design.

5. It removes the frustration, loss of self-esteem, and depression that are generally associated
with the term-end examination.
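The generation of individualised parallel papers from a question bank, as per a blueprint, can be
sketched as follows. The function name, blueprint format, and sample data are illustrative
assumptions; real on-demand systems would draw on much larger banks and richer blueprints.

```python
# Hypothetical sketch: drawing a unique question paper from a question bank
# according to a blueprint that fixes how many questions of each type to pick.
import random

def generate_paper(question_bank, blueprint, seed=None):
    """Pick questions at random from the bank as per the blueprint.

    question_bank: dict mapping question type -> list of questions
    blueprint:     dict mapping question type -> number of questions required
    """
    rng = random.Random(seed)  # a per-candidate seed makes the paper reproducible
    paper = []
    for qtype, count in blueprint.items():
        pool = question_bank[qtype]
        if count > len(pool):
            raise ValueError(f"Bank has too few '{qtype}' questions")
        paper.extend(rng.sample(pool, count))  # sample without replacement
    return paper

bank = {
    "multiple_choice": [f"MCQ {i}" for i in range(1, 21)],
    "short_answer": [f"SA {i}" for i in range(1, 11)],
}
blueprint = {"multiple_choice": 5, "short_answer": 2}
paper = generate_paper(bank, blueprint, seed=42)
print(len(paper))  # 7
```

Seeding the generator per candidate makes each paper reproducible for audit, while different seeds
yield the parallel question papers described above.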

1.11.3 Focus on Assessment and Evaluation in various Educational Com-
missions and NCFs
Examinations are an indispensable part of the educational process as some form of assessment is
necessary to determine the effectiveness of teaching learning processes and their internalization by
learners. Various Commissions and Committees have felt the need for examination reforms. The
Hunter Commission (1882), Calcutta University Commission or Sadler Commission (1917-1919),
Hartog Committee Report (1929), the Report of the Central Advisory Board / Sargent Plan (1944),
Secondary Education Commission / Mudaliar Commission (1952-53) have all made recommendations
regarding reducing emphasis on external examination and encouraging internal assessment through
Continuous and Comprehensive Evaluation.

This aspect has been strongly taken care of in the National Policy on Education (1986), which calls
for “Continuous and Comprehensive Evaluation that incorporates both scholastic and non-scholastic
aspects of evaluation, spread over the total span of instructional time”.

The Report of the Committee for Review of NPE-1986, brought out by the Government of India in 1991,
lays down norms for “continuous comprehensive internal evaluation” and suggests safeguards against
abuse of this evaluation system. The report brought out by MHRD, Govt. of India in January 1992 has
also referred to the provisions of NPE with regard to the evaluation process and examination
reforms, and also suggested ‘continuous and comprehensive internal evaluation of the scholastic and
non-scholastic achievement of the students’.

Accordingly, the National Curriculum Framework - 2005 (NCF-05), proposing examination reforms,
stated: “Indeed, boards should consider, as a long-term measure, making the Class X examination
optional, thus permitting students continuing in the same school (and who do not need a board
certificate) to take an internal school examination instead”. Further, in NPE (1986) it has been
emphasized that at the school level the evaluation should be formative or developmental in nature
because at this stage child is in the formative stage of learning and thus the emphasis should be on
improvement of learning.

Developing Assessment Tools, Techniques & Strategies – I
2
Contents
2.1 Domains of Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.1 Cognitive Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1.2 Affective Attitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1.3 Psycho-motor Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1.4 Relationship between Educational objectives, Learning experiences and Eval-
uation: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2 Revised taxonomy of objectives [2001] and its implications for assessment and stating
the objectives: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.1 Knowledge dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.2 Cognitive domain (knowledge-based) . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.3 Stating objectives as learning outcomes - General & Specific: . . . . . . . . . . 27
2.2.4 Construction of Achievement tests – steps, procedure and uses: . . . . . . . . . 27
2.2.5 TYPE OF TEST ITEMS: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2.6 Construction of Diagnostic test – steps, uses and limitation: . . . . . . . . . . 32
2.3 Remedial Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 Need . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.2 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.3 Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4 Quality assurance in tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.1 Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.2 Validity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.1 Domains of Learning


In 1956, educational psychologist Dr. Benjamin Bloom created a system to classify learning objec-
tives into a series of learning domains that encourage teachers to think holistically about education.
His system came to be known as Bloom’s Taxonomy. Much has been written about it, and it has
been widely applied.

When these learning domain ideas are applied to learning environments, active verbs are used to
describe the kind of knowledge and intellectual engagement we want our students to demonstrate.

Bloom identified three domains, or categories, of educational activities :

2.1.1 Cognitive Knowledge
The Cognitive Domain develops six areas of intellectual skills that build sequentially from simple to
complex behaviours. Bloom arranged them this way :

1. Knowledge (recall of information)

2. Comprehension (understanding of meaning)

3. Application (use of concept)

4. Analysis (deconstruction of concept)

5. Synthesis (combination of information to create meaning)

6. Evaluation (judgment of concept)

In time, this arrangement evolved into what we now call Bloom’s Revised Taxonomy. Category
names were changed from nouns to verbs, but are still ordered from simple to complex :

1. Remembering

2. Understanding

3. Applying

4. Analysing

5. Evaluating

6. Creating

2.1.2 Affective Attitude


The Affective Domain includes five areas of emotional response, categorized as simple to complex
ways of processing feelings and attitude. Bloom arranged them this way :

1. Receiving (passively paying attention)

2. Responding (actively learning and reacting)

3. Valuing (attaching worth to information)

4. Organizing (arranging and elaborating on information)

5. Characterizing (valuing belief that influences behavior)

2.1.3 Psycho-motor Skills


The Psychomotor Domain, which focuses on physical skills, was identified, but not defined, by Dr.
Bloom. His original ideas were expanded by 1970s educators, including Dr. Elizabeth Simpson, who
developed them in this simple-to-complex order :

1. Perception (sensory guiding of motor activity)

2. Set (feeling ready to act)

3. Guided Response (beginning to learn complex skills)

4. Mechanism (developing basic proficiency)

5. Complex Overt Response (performing with advanced skill)

6. Adaptation (modifying movement to meet special circumstances)

7. Origination (creating situation-specific movements)

2.1.4 Relationship between Educational objectives, Learning experiences and Evaluation
The teaching part includes details about the methodology and tools you use to deliver the content.
You may adopt an inductive method and start the topic as a discussion, or you may use a deductive
method and teach whatever is in the syllabus and then have a group discussion. You may use a PPT
or a video resource that describes the topic in an effective manner.

The learning part includes the methodology and tools students use to learn the topic. They may be
asked to read a book or watch a video before the first class on the topic (a flipped classroom). They
may be asked to prepare for a seminar: one may focus on theory, another on experimental techniques,
and another on applications. They may be asked to prepare for a group discussion or a study report.

The experience part is the most crucial part of the entire educational mechanism. How do you plan
to give experiential learning to the students? You may ask them to read a book and write the gist,
ask them to watch a video and write a report, ask them to develop a model, ask them to play around
with an interactive animation and write their observations, or take them to an industry or research
lab and ask them to write about their experiences.

Without experience there won’t be any learning; students will forget things very easily. We need
active (not passive) participation of students in the learning process. Preparing students just to
remember things is only the first stage of learning, and there are five more stages without which
learning will not be complete. That is what is described in the modified version of Bloom’s taxonomy.

The evaluation part comes when you want to rate the learning level of students.

Are you trying to test the remembering level of students, or understanding, applying, analysing,
evaluating, creating?

The remembering part tests “what” is there in the textbook.

The understanding part tests “why” something is presented in such a manner in the textbook.

The application part tests applying the equations derived or concepts developed to some problems
and obtaining results.

The analysis part cross-checks the validity of the equations or concepts developed: what if this
were not there, what if it were the other way around, and so on.

So the understanding part deals with where a particular equation or concept works, and the analysis
part deals with where it fails.
Then comes the evaluation part, where the student is given a situation and applies the concepts
learned to predict the outcome. Students should be able to give reasons why something works or why
something fails in the given situation, so they have to cross both the understanding and analysis
stages to judge a case, a system or a situation. Then comes the creation part, where the student is
asked to create something new that answers the problems identified at the analysis level; it is not
just doing something new. The results may then be presented as a study report or a project report.

2.2 Revised taxonomy of objectives [2001] and its implications for assessment and stating the objectives:

2.2.1 Knowledge dimensions

The revised taxonomy distinguishes four dimensions of knowledge:

1. Factual knowledge: the basic elements of a discipline, such as terminology and specific details.

2. Conceptual knowledge: the interrelationships among the basic elements, such as classifications,
principles, generalizations, theories and structures.

3. Procedural knowledge: how to do something, including subject-specific skills, techniques,
methods, and criteria for determining when to use them.

4. Metacognitive knowledge: knowledge of cognition in general as well as awareness of one’s own
cognition.
2.2.2 Cognitive domain (knowledge-based)
In the 1956 original version of the taxonomy, the cognitive domain is broken into the six levels
of objectives listed below. In the 2001 revised edition of Bloom’s taxonomy, the levels have slightly
different names and the order is revised: Remember, Understand, Apply, Analyze, Evaluate, and
Create (rather than Synthesize).

Knowledge
Knowledge involves recognizing or remembering facts, terms, basic concepts, or answers without
necessarily understanding what they mean. Its characteristics may include:

• Knowledge of specifics—terminology, specific facts.

• Knowledge of ways and means of dealing with specifics—conventions, trends and sequences,
classifications and categories.

• Knowledge of the universals and abstractions in a field—principles and generalizations, theories
and structures.

Comprehension
Comprehension involves demonstrating an understanding of facts and ideas by organizing, summarizing,
translating, generalizing, giving descriptions, and stating the main ideas.

Application
Application involves using acquired knowledge—solving problems in new situations by applying ac-
quired knowledge, facts, techniques and rules. Learners should be able to use prior knowledge to
solve problems, identify connections and relationships and how they apply in new situations.

Analysis
Analysis involves examining and breaking information into component parts, determining how the
parts relate to one another, identifying motives or causes, making inferences, and finding evidence
to support generalizations. Its characteristics include

• Analysis of elements

• Analysis of relationships

• Analysis of organization

Synthesis
Synthesis involves building a structure or pattern from diverse elements; it also refers to the act of
putting parts together to form a whole. Its characteristics include:

• Production of a unique communication

• Production of a plan, or proposed set of operations

• Derivation of a set of abstract relations

Evaluation
Evaluation involves presenting and defending opinions by making judgments about information, the validity of ideas, or quality of work based on a set of criteria. Its characteristics include:

• Judgments in terms of internal evidence

• Judgments in terms of external criteria

The Affective Domain (emotion-based)


Skills in the affective domain describe the way people react emotionally and their ability to feel other
living things’ pain or joy. Affective objectives typically target the awareness and growth in attitudes,
emotion, and feelings.
There are five levels in the affective domain moving through the lowest-order processes to the highest.

Receiving
The lowest level; the student passively pays attention. Without this level, no learning can occur.
Receiving is about the student’s memory and recognition as well.

Responding
The student actively participates in the learning process, not only attends to a stimulus; the student
also reacts in some way.

Valuing
The student attaches a value to an object, phenomenon, or piece of information. The student
associates a value or some values to the knowledge they acquired.

Organizing
The student can put together different values, information, and ideas, and can accommodate them
within his/her own schema; the student is comparing, relating and elaborating on what has been
learned.

Characterizing
The student at this level tries to build abstract knowledge.

The psychomotor domain (action-based)


Skills in the psychomotor domain describe the ability to physically manipulate a tool or instrument
like a hand or a hammer. Psychomotor objectives usually focus on change and/or development
in behavior and/or skills. Bloom and his colleagues never created subcategories for skills in the
psychomotor domain, but since then other educators have created their own psychomotor taxonomies.
Simpson(1972) proposed the following levels:

Perception
The ability to use sensory cues to guide motor activity: this ranges from sensory stimulation, through cue selection, to translation.

Key words:
chooses, describes, detects, differentiates, distinguishes, identifies, isolates, relates, selects.

Set
Readiness to act: It includes mental, physical, and emotional sets. These three sets are dispositions
that predetermine a person’s response to different situations (sometimes called mindsets). This
subdivision of psychomotor is closely related with the ”responding to phenomena” subdivision of the
affective domain.

Keywords:
begins, displays, explains, moves, proceeds, reacts, shows, states, volunteers.

Guided response
The early stages of learning a complex skill that includes imitation and trial and error: Adequacy of
performance is achieved by practicing.

Keywords:
copies, traces, follows, reacts, reproduces, responds.

Mechanism
The intermediate stage in learning a complex skill: Learned responses have become habitual and the
movements can be performed with some confidence and proficiency.

Key words:
assembles, calibrates, constructs, dismantles, displays, fastens, fixes, grinds, heats, manipulates, measures, mends, mixes, organizes, sketches.

Complex overt response


The skillful performance of motor acts that involve complex movement patterns: Proficiency is
indicated by a quick, accurate, and highly coordinated performance, requiring a minimum of energy.
This category includes performing without hesitation and automatic performance. For example,
players will often utter sounds of satisfaction or expletives as soon as they hit a tennis ball or throw
a football because they can tell by the feel of the act what the result will produce.

Key words:
assembles, builds, calibrates, constructs, dismantles, displays, fastens, fixes, grinds, heats, manip-
ulates, measures, mends, mixes, organizes, sketches. (Note: The key words are the same as in
mechanism, but will have adverbs or adjectives that indicate that the performance is quicker, better,
more accurate, etc.)

Adaptation
Skills are well developed and the individual can modify movement patterns to fit special requirements.

Key words:
adapts, alters, changes, rearranges, reorganizes, revises, varies.

Origination
Creating new movement patterns to fit a particular situation or specific problem: Learning outcomes
emphasize creativity based upon highly developed skills.

Key words:
arranges, builds, combines, composes, constructs, creates, designs, initiates, makes, originates.

2.2.3 Stating objectives as learning outcomes - General & Specific:


General objective:
There is usually only one, as it encompasses the entirety of an investigation or a project; it is the primary goal to be achieved, that towards which all the efforts of an organization, or all the chapters of a thesis, contribute.

Specific objectives:
There are usually several, since each segment of an organization or each chapter of an investigation has its own goal to be achieved, which is subsumed by or contained in the general objective.
Thus, the sum of all the specific objectives should meet the general objective as a result, since the specific objectives are the steps that must be taken first (and often in succession or in an organized way) to reach the top of the ladder.

GENERAL vs SPECIFIC OBJECTIVES

Function:
  General: Summarize and present the central idea of an academic paper.
  Specific: Present in detail the results that are intended to be achieved through the investigation.

Sense:
  General: Wider.
  Specific: More detailed.

Elements:
  General: It must contain the hypothesis or the problem that will be investigated in the work, as well as the delimitation of the topic.
  Specific: It should describe the stages of the search in the sequence of execution; it must also relate the object of the work with its particularities, with greater delimitation.

Normally, the objectives are set before undertaking an action or investigation, as it is


much more convenient to know where we want to go before starting to walk. That is: we can only
find out which is the best route to success, if we first know what the goal we have set ourselves is.
That is why setting clear objectives is part of any planning in any area.

2.2.4 Construction of Achievement tests – steps, procedure and uses:


An achievement test is any test designed to assess achievement in any subject with regard to a set of predetermined objectives.

Major steps involved in the construction of achievement test:

• Planning of test

• Preparation of a design for the test

• Preparation of the blue print

• Writing of items

• Preparation of the scoring key and marking scheme

• Preparation of question-wise analysis

1. Planning of test:

• Objective of the Test


• Determine the maximum time and maximum marks

2. Preparation of a design for the test:

• Important factors to be considered in design for the test are:


• Weightage to objectives
• Weightage to content
• Weightage to form of questions
• Weightage to difficulty level.

3. Weightage to objectives: This indicates what objectives are to be tested and what weightage has to be given to each objective.

   Sl.No   Objectives      Marks   Percentage
   1       Remembering     3       12
   2       Understanding   2       8
   3       Application     6       24
   4       Analysis        8       32
   5       Synthesis       4       16
   6       Evaluation      2       8
           Total           25      100

4. Weightage to content: This indicates the various aspects of the content to be tested and the weightage to be given to these different aspects.

   Sl.No   Content       Marks   Percentage
   1       Sub topic-1   15      60
   2       Sub topic-2   10      40
           Total         25      100

5. Weightage to form of questions: This indicates the form of the questions to be included in the test and the weightage to be given for each form of questions.

   Sl.No   Form of questions   No. of Questions   Marks   Percentage
   1       Objective type      14                 7       28
   2       Short answer type   7                  14      56
   3       Essay type          1                  4       16
           Total               22                 25      100

6. Weightage to difficulty level: This indicates the total marks and weightage to be given to questions of different difficulty levels.

   Sl.No   Difficulty level   Marks   Percentage
   1       Easy               5       20
   2       Average            15      60
   3       Difficult          5       20
           Total              25      100
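The weightage tables above are simple proportions of the total marks. As a hedged sketch (the variable names and structure are ours; the marks mirror the weightage-to-objectives table), the percentages can be derived and checked in a few lines:

```python
# Marks allocated to each objective in a 25-mark test (illustrative,
# mirroring the weightage-to-objectives table above).
marks_by_objective = {
    "Remembering": 3, "Understanding": 2, "Application": 6,
    "Analysis": 8, "Synthesis": 4, "Evaluation": 2,
}

total_marks = sum(marks_by_objective.values())
weightage = {
    objective: round(100 * marks / total_marks)
    for objective, marks in marks_by_objective.items()
}

print(total_marks)  # 25
print(weightage)
```

With these marks each percentage is exact (marks × 4), so the percentages sum to 100, which is a quick sanity check for any hand-filled design table.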

7. Preparation of the blue print: Blue print is a three-dimensional chart giving the placement
of the objectives, content and form of questions.

8. Writing of items:

• The paper setter writes items according to the blue print.

• The difficulty level has to be considered while writing the items.

• It should also be checked whether all the questions included can be answered within the time allotted.

• It is advisable to arrange the questions in the order of their difficulty level.

• In the case of short answer and essay type questions, the marking scheme is prepared.

• In preparing marking scheme, the examiner has to list out the value points to be credited
and fix up the mark to be given to each value point.

Marking Scheme:

   Q.No   Value points    Marks   Total Marks
   1      Value point-1   1/2
          Value point-2   1/2
          Value point-3   1/2     2
          Value point-4   1/2
   2      Value point-1   1/2
          Value point-2   1/2
          Value point-3   1/2     2
          Value point-4   1/2

9. Preparation of Question-wise Analysis

Question-wise Analysis

   Q.No   Content       Objectives      Form of Questions   Difficulty Level   Marks   Estimated Time (mins)
   1      Sub-topic-1   Knowledge       Objective Type      Easy               1/2     1
   2      Sub-topic-2   Understanding   Objective Type      Average            1/2     1
   3      Sub-topic-2   Application     Objective Type      Easy               1/2     1
   4      Sub-topic-1   Knowledge       Objective Type      Easy               1/2     1
   5      Sub-topic-2   Understanding   Objective Type      Average            1/2     1
   6      Sub-topic-1   Analysis        Objective Type      Average            1/2     1
   7      Sub-topic-1   Synthesis       Short Answer        Difficult          2       3
   8      Sub-topic-2   Application     Short Answer        Easy               2       3
   9      Sub-topic-1   Analysis        Essay               Average            4       10
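A question-wise analysis like the one above can be tallied programmatically to verify the paper's totals. This is a hedged sketch: the tuples simply transcribe the illustrative rows of the table (difficulty, marks and estimated minutes per question).

```python
# (question no., difficulty level, marks, estimated time in minutes),
# transcribed from the illustrative question-wise analysis above.
analysis = [
    (1, "Easy", 0.5, 1), (2, "Average", 0.5, 1), (3, "Easy", 0.5, 1),
    (4, "Easy", 0.5, 1), (5, "Average", 0.5, 1), (6, "Average", 0.5, 1),
    (7, "Difficult", 2, 3), (8, "Easy", 2, 3), (9, "Average", 4, 10),
]

total_marks = sum(marks for _, _, marks, _ in analysis)
total_time = sum(minutes for _, _, _, minutes in analysis)

# Marks per difficulty level, to compare against the intended
# weightage to difficulty level in the test design.
marks_by_level = {}
for _, level, marks, _ in analysis:
    marks_by_level[level] = marks_by_level.get(level, 0) + marks
```

Comparing such tallies against the design tables catches mismatches between the blueprint and the actual paper before the test is administered.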

2.2.5 TYPE OF TEST ITEMS:


Objective Type
An objective type test item is one for which the response can be scored objectively. Objective type test items are broadly classified into two:

1. Supply type (Recall Type) The respondent has to supply the responses.

2. Selection type (Recognition Type) The respondent has to select the responses from among the
given responses.

1. OBJECTIVE TYPE – 4 TYPES

• True – False Items (Alternate Response Type)


• Multiple Choice Items
• Matching Type Items
• Completion Type Test Items

2. ADVANTAGES OF OBJECTIVE TYPE ITEMS:

• A large amount of study material can be tested in a very short period of time.
• Economy of time.
• Objectivity of scoring.
• No bluffing

• It reduces the subjective element of the examiner to the minimum.
• If carefully planned, it can measure the higher mental process of understanding, applica-
tion, analysis, prediction and interpretation.

3. LIMITATIONS OF OBJECTIVE TYPE ITEMS

• Difficulty in preparing good items.


• Problem of guessing.
• Problem of cheating.
• Inefficiency in testing complicated skills
• High printing cost.
• Emphasis on testing superficial knowledge.

Short answer type:


• A question requiring three value points at most may be defined as a short answer question.

• Value points diminish the subjectivity.

• Help in ensuring wide coverage of content.

1. ADVANTAGES OF SHORT ANSWER TYPE ITEMS:

• Large portion of the content can be covered in a test.


• No opportunity for guessing.
• Easy to construct, because it measures a relatively simple outcome.
• It can be made quite objective by carefully fixing the value points.
• Useful in evaluating the ability to interpret diagrams, charts, graphs, etc.
• If carefully prepared, deep level objectives understanding, application and problem solving
skill can be evaluated.

2. LIMITATIONS OF SHORT ANSWER TYPE ITEMS:

• It is more subjective than the objective type of items.


• It may encourage students to memorise facts and develop poor study habits.
• Mechanical scoring is not possible.

Essay type:
• It is free response test item.

• Help in ensuring a wide coverage of content and variety of objectives.

• Help in evaluating complex skills.

1. ADVANTAGES OF ESSAY TYPE ITEMS:

• Easy to prepare.
• Useful in measuring certain abilities and skills.
• Permit the examinee to write down comprehensively what he knows about something.
• Promote originality and creative thinking.

• Possibility of guess work can be eliminated.
• Reduce the chance of on-the-spot copying.
• Low printing cost.

2. LIMITATIONS OF ESSAY TYPE ITEMS:

• Minimum validity.
• Lack of reliability.
• No objectivity.
• Rote memory is encouraged.
• It is a time consuming test item.

2.2.6 Construction of Diagnostic test – steps, uses and limitation:


Uses of Diagnostic Testing:
1. It takes up where the formative test leaves off.

2. It is a means by which an individual profile is examined and compared against certain norms
or criteria.

3. It focuses on individual’s educational weakness or learning deficiency and identify the gaps in
pupils.

4. It is more intensive and act as a tool for analysis of Learning Difficulties.

5. It is more often limited to low ability students.

6. It is corrective in nature.

7. It pinpoints the specific types of error each pupil is making and searches for underlying causes
of the problem.

8. It is much more comprehensive.

9. It helps us to identify the trouble spots and discover those areas of students' weakness that are left unresolved by the formative test.

Steps of Educational Diagnostic Test:


1. Identification and classification of pupils having Learning Difficulties:

(a) Constant observation of the pupils.


(b) Analysis of performance: Avoiding assignments & copying from others.
(c) Informal classroom Unit/Achievement test.
(d) Tendency of withdrawal, and a gap between expected and actual achievement.

2. Determining the specific nature of the Learning Difficulty or errors:

(a) Observation.
(b) Analysis of oral responses.
(c) Written class work.
(d) Analysis of student’s assignments and test performance.

(e) Analysis of cumulative and anecdotal records.
3. Determining the Factors/Reasons or Causes Causing the learning Difficulty (Data Collection):
(a) Retardation in basic skills.
(b) Scholastic aptitude factors.
(c) Physical, Mental and Emotional (Personal) Factors.
(d) Indifferent attitude and environment.
(e) Improper teaching methods, unsuitable curriculum, complex course materials.
4. Remedial measures/treatment to rectify the difficulties:
(a) Providing face to face interaction.
(b) Providing as many simple examples as possible.
(c) Giving concrete experiences, use of teaching aids.
(d) Promoting active involvement of the students.
(e) Consultation of Doctors/Psychologists/Counsellors.
(f) Developing strong motivation.
5. Prevention of Recurrence of the Difficulties:
(a) Planning for non-recurrence of the errors in the process of learning.

Construction of Diagnostic Test:


The following are the broad steps involved in the construction of a diagnostic test. A diagnostic test may be standardized or teacher-made, and it more or less follows the principles of test construction, i.e., preparation, planning, writing items, assembling the test, preparing the scoring key and marking scheme, and reviewing the test.

The unit on which a diagnostic test is based should be broken into learning points without omitting any of the items, and the various types of test items are to be prepared in a proper sequence:
1. Analysis of the context minutely i.e., major and minor one.
2. Forming questions on each minor concept (recall and recognition type) in order of difficulty.
3. Review the test items by the experts/experienced teacher to modify or delete test items if
necessary.
4. Administering the test.
5. Scoring the test and analysis of the results.
6. Identification of weakness
7. Identify the causes of weakness (such as defective hearing or vision, poor home conditions,
unsatisfactory relations with classmates or teacher, lack of ability) by the help of interview,
questionnaires, peer information, family, class teacher, doctor or past records.
8. Suggest remedial programme (No set pattern).

Motivation, re-teaching, token economy, giving reinforcement, correcting emotions, changing section, giving living examples, moral preaching.

Elements of Diagnostic Tests:

Barriers in Diagnostic Tests:


1. Attitudinal change.

2. Will Power and patience of the teacher.

3. Time Scheduling.

4. Sequencing of Study.

5. Faulty method of data collection and test.

6. Maintaining records impartially.

7. Costs.

2.3 Remedial Measures


Remedial instruction aims to improve a skill or ability in each student. Using various techniques, such as more practice or explanation, repeating the information and devoting more time to working on the skills, the teacher guides each student through the educational process.

A student with a low reading level, for example, might be given remediation on a one-on-one basis, with phonics instruction and practice reading text aloud.

2.3.1 Need
It aims to cater for individual differences, help students who lag behind, develop interpretation skills
and help students in critical thinking skills in the learning of map work.

2.3.2 Types
Small Group Tutoring
Remedial courses often send ‘remedial students’ off into small groups to support students who are falling behind. Often, schools bring in specialists who peel off students into small groups to focus on specific interventions.

Similarly, a common teaching strategy is to allow higher achieving students to work in groups alone.
This gives time for the teacher to spend focused time with a small group of students who need
additional support.

One-To-One Tutoring
One-to-one tutoring has either a trained specialist, the classroom teacher, or a volunteer spend
individual time with a student. While it is an effective way of supporting students, it is resource
intensive. It is often hard to find enough time and staff to have one-to-one interventions while also
supporting the rest of the class. Some parents opt for paid private one-to-one tutoring to address
this shortfall.

Private Tutoring
Private tutoring is one of the most popular formats for remedial support. Parents who have the
funds to send their children to after-school tutoring may use this as an option to help ensure their
students keep up with their peers.

Specialist Tutoring
Trained specialists, such as in the reading recovery program, can provide research-based systematic
programs of support to help students reach benchmarks. Often, schools employ trained specialists
to come into classrooms and take one-to-one or small-group sessions with students in need.

Peer Tutoring
Peer tutoring involves one student helping another student on their work. This may take the form
of older students coming into the classroom to help younger students. Or, it may be getting more
advanced students in the same class to pair up with less advanced students to help them learn.

Volunteer Tutoring
Schools often rely on volunteer tutors to help provide additional support to remedial students. This
may take the form of ‘parent helpers’ who come into the classroom to help the teacher and get to
know the class better. A challenge of volunteer tutoring is providing sufficient training and support
for the volunteers so they can effectively help students.

Withdrawal System
A withdrawal system involves removing students entirely from a mainstream classroom for a short
(one lesson) or long (indefinitely) time to give tailored support.

The challenge of withdrawal systems is that it might stigmatize students and exclude them from
participation in mainstream activities. Exclusion based on special needs is highly discouraged by
contemporary education scholars.

Computer Assisted Interventions


Computer assisted interventions (CAIs) provide remedial education via computerized lessons. Computers have some potential benefits for students who are falling behind, including:

• Self-paced lessons for mastery of content

• Pause and rewind possibilities
• Accessibility for rural and remote students
However, there are some challenges of CAIs such as:
– Potential lack of synchronous teacher-student interaction
– Cost of use of technologies and internet

2.3.3 Strategies
1. Teachers should modify the curriculum to suit students’ learning styles and abilities.
2. To gain expertise, the teacher should set some simple teaching objectives.
3. Textbooks should not be used to guide teaching and should not be considered the school
curriculum.
4. Teachers should be encouraged to follow cross-curricular teaching guidelines by flexibly con-
necting similar teaching areas so that more time can be spent on effective practices and learning.
5. Teachers should be able to create materials of various quality using information from the
internet, newspapers, magazines, and the Education Department’s references.
6. Before moving on to abstract ideas, teachers should include concrete and useful examples and
continue at a speed that is appropriate for the student’s learning abilities.
7. Teachers should use more teaching aids, games, and events to promote active participation
from students. They can also use information technology and all available teaching tools to
assist students.

2.4 Quality assurance in tools


2.4.1 Reliability
The dictionary meaning of reliability is consistency, dependability or trust. So in measurement, reliability is the consistency with which a test yields the same result in measuring whatever it does measure. A test score is called reliable when we have reason to believe the score is stable and trustworthy. Stability and trustworthiness depend upon the degree to which the score is free from chance error. Therefore, reliability can be defined as the degree of consistency between two measurements of the same thing.

It is not always possible to obtain perfectly consistent results, because several factors such as physical health, memory, guessing, fatigue and forgetting may affect the results from one measurement to another. These extraneous variables may introduce some error into our test scores. This error is called measurement error. So while determining the reliability of a test we must take into consideration the amount of error present in the measurement.

Methods of Determining Reliability:
For most educational tests the reliability coefficient provides the most revealing statistical index of quality that is ordinarily available. Estimates of the reliability of tests provide essential information for judging their technical quality and motivating efforts to improve them. The consistency of a test score is expressed either in terms of shifts of an individual's relative position in the group or in terms of the amount of variation in an individual's score.
1. Relative Reliability or Reliability Coefficient:
In this method the reliability is stated in terms of a coefficient of correlation known as reliability
coefficient. Hence we determine the shifting of relative position of an individual’s score by
coefficient of correlation.

2. Absolute Reliability or Standard error of Measurement:
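The heading above names the standard error of measurement without elaborating. As a hedged illustration drawn from standard classical test theory (not a formula stated in this book), the SEM is conventionally computed as SEM = SD × √(1 − r), where SD is the standard deviation of the test scores and r is the reliability coefficient:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Conventional classical-test-theory estimate: SEM = SD * sqrt(1 - r).
    This formula is standard psychometrics, assumed here rather than
    quoted from the text."""
    return sd * math.sqrt(1 - reliability)

# Illustrative values: standard deviation 10, reliability coefficient .84
sem = standard_error_of_measurement(10, 0.84)
print(round(sem, 1))  # 4.0
```

A smaller SEM means an individual's obtained score varies less around their true score, which is the "absolute" sense of reliability named above.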

Methods of Determining Relative Reliability or Reliability Coefficient:


There are three methods of determining reliability coefficient, such as:
1. Test-Retest Method:
This is the simplest method of determining test reliability. In this method the test is given and then repeated on the same group, and the correlation between the first set of scores and the second set of scores is obtained. A high coefficient of correlation indicates high stability of test scores. If the test is re-administered within a short interval, say a day or two, the pupils will recall their first answers and spend their time on new material, which will tend to increase their scores in the second administration. If the interval is too long, say one year, the maturation effect will tend to increase the retest scores. In both cases this tends to lower the reliability estimate. What the time gap between the two administrations should be depends largely on the use and interpretation of the test scores. The difficulty of controlling the conditions that influence retest scores limits the use of the test-retest method in estimating the reliability coefficient.
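The test-retest procedure described above can be sketched in a few lines of Python. The scores below are invented for illustration; the helper computes the ordinary Pearson correlation between the two administrations:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores for the same eight pupils on the first and second
# administrations of the same test.
first_scores  = [12, 15, 9, 20, 17, 11, 14, 18]
second_scores = [13, 14, 10, 19, 18, 10, 15, 17]

reliability = pearson(first_scores, second_scores)
# A coefficient close to 1 indicates high stability of the test scores.
```

Here the two sets of scores track each other closely, so the coefficient comes out high; larger shifts in pupils' relative positions between the two administrations would pull it down.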

2. Equivalent Forms/Parallel Forms Method:


Reliability of test scores can also be estimated by the equivalent forms method, otherwise known as the alternate forms or parallel forms method. When two equivalent forms of a test are constructed, the correlation between the two may be taken as a measure of the self-correlation of the test. In this process two parallel forms of the test are administered to the same group of pupils within a short interval of time, and then the scores on the two tests are correlated. This correlation provides the index of equivalence. Usually, in the case of standardized psychological and achievement tests, equivalent forms are available. Both tests selected for administration should be parallel in terms of content, difficulty, format and length. When a time gap is provided between the administrations of the two forms, the correlation of the test scores provides a measure of both reliability and equivalence. The major drawback of this method is obtaining two truly parallel forms of a test: when the tests are not exactly equal in terms of content, difficulty and length, comparison between the scores obtained from them may lead to erroneous decisions.

3. Split-Half Method:
There are also methods by which reliability can be determined from a single administration of a single test. One such method is the split-half method. In this method a test is administered to a group of pupils in the usual manner. The test is then divided into two equivalent halves and the correlation between these half-tests is found. The common procedure for splitting the test is to take all odd-numbered items (1, 3, 5, etc.) in one half and all even-numbered items (2, 4, 6, 8, etc.) in the other half. The scores of the two halves are correlated, and the full-test reliability is then estimated using the Spearman-Brown formula:

                    r2 = 2r1 / (1 + r1)                    (2.1)

Where,
r2 = reliability coefficient of the full test
r1 = coefficient of correlation between the half tests.

For example, suppose that by correlating both halves we find a coefficient of .70. Using formula (2.1) we get the reliability coefficient of the full test as:

            r2 = (2 × .70) / (1 + .70) = 1.40 / 1.70 = .82            (2.2)

The reliability coefficient is .82 when the coefficient of correlation between the half tests is .70. It indicates to what extent the sample of test items is a dependable sample of the content being measured, i.e., internal consistency.
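Formula (2.1) is the Spearman-Brown correction and is simple to apply directly. In this small sketch the function name is ours; the numbers are the worked example from the text:

```python
def spearman_brown(r_half: float) -> float:
    """Formula (2.1): estimated reliability of the full test,
    given the correlation r_half between the two half-tests."""
    return 2 * r_half / (1 + r_half)

# The worked example above: a half-test correlation of .70
r_full = spearman_brown(0.70)
print(round(r_full, 2))  # 0.82
```

Note that the corrected coefficient is always at least as large as the half-test correlation, since the full test is twice as long as either half.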

2.4.2 Validity
Validity is the most important characteristic of an evaluation programme, for unless a test is valid it serves no useful function. Psychologists, educators and guidance counselors use test results for a variety of purposes. Obviously, no purpose can be fulfilled, even partially, if the tests do not have a sufficiently high degree of validity. Validity means the truthfulness of a test: the extent to which the test measures what the test maker intends it to measure.
It includes two aspects: what is measured, and how consistently it is measured. Validity is not a test characteristic as such; it refers to the meaning of the test scores and the ways we use the scores to make decisions. The following definitions given by experts will give a clear picture of validity.

Validity of an evaluation device is the degree to which it measures what it is intended to mea-
sure. Validity is always concerned with the specific use of the results and the soundness of our
proposed interpretation.

A test that is reliable is not necessarily valid. For example, suppose a clock is set forward ten minutes. If the clock is a good timepiece, the time it tells us will be reliable, because it gives a consistent result. But it will not be valid as judged by ‘standard time’. This illustrates “the concept that reliability is a necessary but not a sufficient condition for validity.”

Methods of determining Validity:


1. Construct validity
What is a construct?
A construct refers to a concept or characteristic that can’t be directly observed, but can be
measured by observing other indicators that are associated with it. Constructs can be char-
acteristics of individuals, such as intelligence, obesity, job satisfaction, or depression; they can
also be broader concepts applied to organizations or social groups, such as gender equality,
corporate social responsibility, or freedom of speech.
Example:
There is no objective, observable entity called “depression” that we can measure directly. But
based on existing psychological research and theory, we can measure depression based on a
collection of symptoms and indicators, such as low self-confidence and low energy levels.
What is construct validity?
Construct validity is about ensuring that the method of measurement matches the construct
you want to measure. If you develop a questionnaire to diagnose depression, you need to know:
does the questionnaire really measure the construct of depression? Or is it actually measuring
the respondent’s mood, self-esteem, or some other construct? To achieve construct validity, you
have to ensure that your indicators and measurements are carefully developed based on rele-
vant existing knowledge. The questionnaire must include only relevant questions that measure
known indicators of depression.
2. Content validity:
Content validity assesses whether a test is representative of all aspects of the construct. To
produce valid results, the content of a test, survey or measurement method must cover all rele-
vant parts of the subject it aims to measure. If some aspects are missing from the measurement
(or if irrelevant aspects are included), the validity is threatened.
Example
A mathematics teacher develops an end-of-semester algebra test for her class. The test should
cover every form of algebra that was taught in the class. If some types of algebra are left out,
then the results may not be an accurate indication of students’ understanding of the subject.
Similarly, if she includes questions that are not related to algebra, the results are no longer a

valid measure of algebra knowledge.

3. Face validity:
Face validity considers how suitable the content of a test seems to be on the surface. It’s similar
to content validity, but face validity is a more informal and subjective assessment.
Example:
You create a survey to measure the regularity of people’s dietary habits. You review the survey
items, which ask questions about every meal of the day and snacks eaten in between for every
day of the week. On its surface, the survey seems like a good representation of what you want
to test, so you consider it to have high face validity.
As face validity is a subjective measure, it’s often considered the weakest form of validity.
However, it can be useful in the initial stages of developing a method.

4. Objectivity:
Objectivity is an important characteristic of a good test. It affects both the validity and reliability of test scores. Objectivity of a measuring instrument means the degree to which different persons scoring the answer script arrive at the same result.

(a) Objectivity of Scoring:


Objectivity of scoring means that the same person or different persons scoring the test at any time arrive at the same result without any chance error. A test, to be objective, must necessarily be so worded that only one correct answer can be given to it. In other words, the personal judgement of the individual who scores the answer script should not be a factor affecting the test scores; if the scoring procedure is objective, the result of a test can be obtained in a simple and precise manner. The scoring procedure should leave no doubt as to whether an item is right, wrong, partly right or partly wrong.
(b) Objectivity of Test Items:
By item objectivity we mean that the item must call for a definite single answer. Well-constructed test items should lend themselves to one and only one interpretation by students who know the material involved. It means the test items should be free from
ambiguity. A given test item should mean the same thing to all the students that the
test maker intends to ask. Dual meaning sentences, items having more than one correct
answer should not be included in the test as it makes the test subjective.

5. Usability:
Usability is another important characteristic of measuring instruments, because the practical considerations of evaluation instruments cannot be neglected. The test must have practical value from the time, economy and administration points of view. This may be termed usability.

So while constructing or selecting a test the following practical aspects must be taken
into account:

1. Ease of Administration:
It means the test should be easy to administer so that general classroom teachers can use it. Therefore, simple and clear directions should be given. The test should possess very few subtests, and its timing should not be too difficult to manage.

2. Time required for administration:
Appropriate time limit to take the test should be provided. If, in order to provide ample time to take the test, we make the test shorter, then the reliability of the test will be reduced.
Gronlund and Linn (1995) are of the opinion that “Somewhere between 20 and 60 minutes
of testing time for each individual score yielded by a published test is probably a fairly good
guide”.

3. Ease of Interpretation and Application:


Another important aspect is the interpretation of test scores and the application of test results. If the results are misinterpreted, they are harmful; on the other hand, if they are not applied, they are useless.

4. Availability of Equivalent Forms:


Equivalent forms of tests help to verify questionable test scores. They also help to eliminate the factor of memory while retesting pupils on the same domain of learning. Therefore, equivalent forms of the same test in terms of content, level of difficulty and other characteristics should be available.

5. Cost of Testing:
A test should be economical from preparation, administration and scoring point of view.

Interdependence of Validity, Reliability and Objectivity

Developing Assessment Tools, Techniques & Strategies – II
3
Contents
3.1 CCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.2 Need . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.3 Aims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.4 Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.1.5 Relation with Formative Assessment . . . . . . . . . . . . . . . . . . . . . . . 43
3.1.6 Salient features of Formative Assessment . . . . . . . . . . . . . . . . . . . . . 43
3.1.7 Problems faced by Teachers and Students . . . . . . . . . . . . . . . . . . . . 43
3.2 Meaning & construction of process-oriented tools . . . . . . . . . . . . . . . . . . . . 44
3.2.1 Interview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2.2 Inventories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2.3 Observation schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.4 Check-list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.5 Rating scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.6 Anecdotal record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 Assessment of group processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.1 Nature of group dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.2 Socio-metric techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3.3 Steps for formation of group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3.4 Criteria for assessing tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3.5 Criteria for assessment of social skills in collaborative/ co-operative learning
situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Promoting Self-assessment and Peer assessment . . . . . . . . . . . . . . . . . . . . . 55
3.4.1 Self-assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4.2 Peer assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5 Portfolio assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.1 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.2 Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.3 Developing and assessing Portfolio . . . . . . . . . . . . . . . . . . . . . . . . 58
3.5.4 Developing of Rubric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

3.1 CCE
Concept, need, importance, relation with FA & problems faced

3.1.1 Concept
Continuous and comprehensive evaluation (CCE) is a procedure of assessment mandated by the Right to Education Act, 2009. It was introduced by state governments in India as well as by the Central Board of Secondary Education, for students of classes six to ten. CCE is a system of school-based assessment and evaluation of students that covers all features of students' development.

According to the CBSE, “it is a developmental process of assessment which emphasizes two-fold objectives: continuity in evaluation and assessment of broad-based learning and behavioural outcomes.”

According to this scheme, the term continuous is meant to accentuate that evaluation of identified aspects of students' growth and development is a continuous process rather than an event, built into the total teaching-learning process and spread over the duration of the academic session. The term comprehensive means that the scheme tries to cover both the scholastic and the co-scholastic aspects of students' growth and development.

3.1.2 Need
1. Help develop cognitive, psychomotor and affective skills
2. Develop students' thinking processes and memory
3. Make continuous evaluation an integral part of the teaching-learning process
4. Use evaluation data for improving teaching-learning strategies
5. Utilise assessment data as a quality control device to raise academic outcomes
6. Enable teachers to make student-centric decisions about learners' processes of learning and learning environments
7. Transform teaching and learning into a student-centric activity.

3.1.3 Aims
1. To assess every aspect of the child during their presence at school.
2. To minimize stress on students.
3. To make assessment regular and comprehensive.
4. To provide a tool for detection and improvement.
5. To provide the learner with greater skill.

3.1.4 Importance
1. It helps the learners identify the challenges faced in education.
2. It is aimed at diagnosing the problematic areas in the development of children apart from their
academic results.
3. It increases the punctuality and regularity of the students. They would try to do their assign-
ments to their entire satisfaction.
4. It provides motivation to the students to work thoroughly with consistency without wasting
time.
5. It can serve as a basis to award scholarships and fee concessions.

3.1.5 Relation with Formative Assessment
Formative assessment is an active learning process in which teachers and students continuously and systematically work to improve students' achievement. Teachers and their students actively engage in the formative assessment process to focus on learning goals and take action to move closer to those goals.

3.1.6 Salient features of Formative Assessment


1. Diagnostic and remedial.

2. Makes the provision for effective feedback.

3. Provides the platform for the active involvement of students in their own learning.

4. Enables teachers to adjust teaching to take account of the results of assessment.

5. Recognizes the profound influence assessment has on the motivation and self-esteem of students.

6. Recognizes the need for students to be able to assess themselves and understand how to improve.

7. Builds on students’ prior knowledge and experience in designing what is taught.

8. Incorporates varied learning styles into deciding how and what to teach.

9. Encourages students to understand the criteria that will be used to judge their work.

10. Offers an opportunity to students to improve their work after feedback.

11. Helps students to support their peers, and expect to be supported by them.

3.1.7 Problems faced by Teachers and Students


Almost one quarter of teachers say they do not face any barriers to conducting formative assessment. Among those who did indicate barriers, the most frequently selected were related to time. Teachers' reporting of barriers to formative assessment is consistent across subject areas and grade levels. Years of teaching experience was related to teachers reporting insufficient training for formative assessment and to facing no barriers. With the shift from print to digital, students receive hundreds of pieces of feedback from the thousands of keystrokes that make up their digital day.

The problem is that, apart from proprietary walled gardens, none of that feedback is collected consistently and presented in a unified manner. A recent review catalogued formative assessment products, with a focus on those that were more authentic and open-ended. It was disappointing to find that most teachers still use spreadsheets to manually enter and track formative assessment data. The review spotted four problems:

1. Different standards

2. No common tagging scheme for content and assessment

3. No agreement on competency

4. Inadequate tools

Digital learning and the explosion of formative data mean the beginning of the end of week-long state tests. By using thousands of formative observations it will be increasingly easy to accurately track individual student learning progressions. But making better use of the explosion of formative data will require leadership and investment. This is an education problem more than a technology problem. It would help if school networks agreed on competency-based protocols and used their market leverage to drive investment towards solutions. Thus, the following are lacking in formative assessment:

1. Lack of experienced, honest and sincere teachers.

2. Take more time to undertake several activities.

3. Sometimes misused by the teachers.

4. Lack of facilities and co-ordination in schools.

5. There may be weaker students for remedial work.

3.2 Meaning & construction of process-oriented tools


3.2.1 Interview
“Interview means a serious conversation which is done with some purpose.” An interview is communication or conversation between two persons, initiated by the interviewer for collecting information about the research, keeping in mind the objectives of the interview. Here the information is collected directly by verbal communication between two or more persons and the responses of the respondents are noted. It is a purposeful and serious conversation.

The important aspect of an interview is the establishment of intimacy so as to get responses from the respondent. Thus, the interview is a process of communication or interaction in which the respondent delivers the required information to the interviewer face-to-face. It is used effectively to collect useful information in many research situations.

When the researcher is particularly conscious about asking the questions in person, to meet his interactive objectives, this process of questioning is called an interview. Here the information is collected from people verbally, in their physical presence. The responses of the respondent are then recorded by the interviewer on a separate sheet. It can be conducted by the interviewer in person or in a group. When the interview is conducted in a group, the size of the group should not be so large that it inhibits participation of most of the members; at the same time, it should not be so small that it lacks substantially greater coverage than the individual interview. The optimum size is approximately 10-12 persons.

Social, intellectual and educational homogeneity is important for effective participation of all group
members. A circular seating arrangement, with the interviewer as one of the group, is conducive to
full and spontaneous reporting and participation. The interview can be conducted one or more times
as per requirement. As a research tool, the interview may be formal or informal, directional or non-directional.

Characteristics of Interview:

1. It is social interaction

2. It is a sincere method

3. It is direct purposeful conversation.

4. It involves various direct involvements of interviewer and respondent.

5. It involves various forms of questions to be asked to the respondent.

6. It is a purposeful and serious conversation.

7. It involves establishment of intimacy between the interviewer and respondent.

8. It is a process of communication or interaction.

9. It involves the note of responses delivered by the respondent.

10. It involves the face-to-face involvement of the respondent and the interviewer.

11. It can be conducted one or more times.

12. It is a tool to collect useful information in many research situations.

13. It can be in person or in group.

14. It elicits responses from the respondent.

15. It is a behavioural method.

16. It indicates social, intellectual and educational homogeneity.

17. It may be formal, informal, directional or non-directional interview.

Construction
The steps of interview include preparation of the interview, execution of the interview, note taking and analysis of the information.

Following are the steps of interview:

Preparation for Interview


Preparation for interview includes deciding the objectives of the interview and preparing the interview register. It is actually the mental preparation of the interviewer for the interview. It includes the investigator's thinking about the objectives, type of interview, number of interviewers, and the position, place and time of the interview, etc.

Objectives of interview
In this step, the general aims of research are converted into specific objectives. The area, the information to be collected, the respondents and the type of interview are decided according to the objectives.

Prepare an Interview Register


While preparing the interview register, the objectives of research are used to frame the questions. The research problem, related variables and the samples are considered. Good questions are based on the subject matter, motivation, realities, attitudes, expectation of information, and the intellect of the interviewer and his ability to develop rapport. These questions can be objective or subjective, specific or general, fixed-answer or free-answer etc.

Proper training, guidance and experience assure a good interview. It is a chain of appropriate questions and answers. The answers to effective questions depend on the content, motivation, attitudes, expectation of information, time of the interview and the ability of the interviewer to establish intimacy. The responses can be objective or subjective, special or general, free or restricted.

After careful evaluation and critical thinking on the above aspects, the appropriate types of questions are planned and a register is prepared whereby the investigator can use the appropriate type of questions. It can be in the form of questions, fill in the blanks, rating scales, checklists etc. The responses can be worked out accordingly.

Execution of Interview
The execution of interview means conducting the interview. As per the pre-plan, whether it be a personal or group interview, before starting it is necessary to disclose the personal identity and the objectives and type of the interview. The investigator should bring the tape recorder and camera, if necessary, and the interview register. Any necessary instructions are delivered to the respondents. The execution of interview includes establishing rapport and eliciting information.

Establishing Rapport
To get all the necessary, relevant and important information related to the subject, it is necessary to gain the confidence of the respondent, thus leading towards a good and successful interview. The interviewer should be polite, well dressed, cool, calm, patient and decent, capable of questioning, and must possess good understanding. The investigator should himself be clear about the questions, their responses and the objectives of the interview. The investigator should be skilful, positive, cheerful, unbiased, capable and sympathetic in attitude, thus establishing a good rapport with the respondents.

Seeking the Information


Asking appropriate questions in the pre-planned sequence, without hurting the feelings of the respondent, is important for getting the necessary and relevant information. Care should be taken that if the respondent strays from the point, the questioning remains flexible enough to bring the conversation back without boring the respondent, so that the information can be obtained.

Note taking
In the final step of the interview, the responses are noted using a paper sheet, a predesigned answer sheet, a tape recorder or a video recorder as per the requirement. The information is then condensed through analysis. Various activities, skills and talents may be used to note the complete information from the respondent.

Analysis of the collected Information


In this step the investigator assesses the respondent's views as per the pre-decided structure. Here the information provided by the respondent is analysed and transformed into specific groups, classes or categories. Then, with reference to the objectives of research, the analysis and interpretation of the data is done.

3.2.2 Inventories
Meaning
A concept inventory is a test to assess students’ conceptual understanding in a subject area.
It consists of multiple-choice questions in which several items are used to evaluate understanding for
each concept.

A key feature is that the items evaluate not simply whether a student gets an answer correct or incorrect, but the nature or quality of the student's understanding. Each incorrect response option for an item reflects a different type of understanding of the concept.

A concept inventory is a criterion-referenced test designed to help determine whether a student has an accurate working knowledge of a specific set of concepts. In general, items with difficulty values ranging between 30% and 70% are best able to provide information about student understanding. Inventories involve surveys or questionnaires.
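The item-difficulty guideline above can be sketched as a simple computation: difficulty is the proportion of students answering an item correctly, and items in the 30%-70% band are flagged as most informative. The response matrix below is a hypothetical illustration.

```python
# A sketch of computing item difficulty for a concept inventory.
# 1 = correct, 0 = incorrect; each row is one student, each column one item.
responses = [
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]

n_students = len(responses)
n_items = len(responses[0])

# Difficulty index: proportion of students who answered each item correctly
difficulty = [sum(row[i] for row in responses) / n_students
              for i in range(n_items)]

# Items inside the 30%-70% band give the most information
informative = [0.30 <= p <= 0.70 for p in difficulty]

print(difficulty)
print(informative)
```

Here only the third item falls in the informative band; the others are too easy or too hard to discriminate well.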

Characteristics of inventories
1. Questions will be direct.
2. There will be no specific correct answers.

3.2.3 Observation schedule


Observation schedule is a method in which data in the field are collected with the help of observation by an observer. An observation schedule is an analytical form, or coding sheet, filled out by researchers during structured observation. It carefully specifies beforehand the categories of behaviours or events under scrutiny and under what circumstances they should be assigned to those categories. Observations are then fragmented, or coded, into these more manageable pieces of information, which are later aggregated into usable, quantifiable data. Observation schedules are utilized primarily in the fields of education, psychology, speech and language therapy, and learning and behavioural therapy.
Schedules can range from exceedingly complex multiple-page examinations to simple tally sheets. Observations may be of the following types:

1. Structured and unstructured

2. Controlled and uncontrolled

3. Participant and non-participant
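The coding-and-aggregation step described above, where fragmented observations are turned into quantifiable data, can be sketched as follows. The behaviour categories and events are hypothetical illustrations.

```python
# A sketch of aggregating coded observations from a structured
# observation session into counts and proportions.
from collections import Counter

# Each entry is one coded observation (a behaviour category) noted
# during the session; category names are hypothetical.
coded_events = ["on-task", "off-task", "on-task", "question",
                "on-task", "off-task", "question", "on-task"]

tally = Counter(coded_events)            # aggregate codes into counts
total = sum(tally.values())
proportions = {code: n / total for code, n in tally.items()}

print(tally)
print(proportions)
```

The resulting proportions are the kind of usable, quantifiable data a tally sheet is meant to produce.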


The contents of an observation schedule template vary depending on the observation to be performed or conducted. Before creating an observation template, one should take note of the type of observation to be done and what is going to be observed.
1. In every observation schedule template or any document, there will always be and there must
be a heading. It is one of the most common parts of any types of observation schedule template
and it helps determine what type of observation is being done and who or what is going to be
observed.

2. The date when the observation is going to be conducted is another common part of an observation schedule. The date includes the number of days or the span of time needed to complete the actual observation, the specific time of day when the observation will be done, as well as when results of the observation will be available.

3. The names of the individuals or groups that are involved in the particular observation activity. These individuals or groups include teachers or other members of the school faculty, students, employees, managers, supervisors, the observer or evaluator, etc.

4. The topic or the main focus of the said observation activity is also included, and often the goals
and objectives are written on top as a guide for both the observer and the one being observed.

5. There are also instructions or directions provided on some observation schedule templates, especially if there are specific things that need to be observed and collected in a particular observation.

6. A legend or abbreviation for scoring or evaluation is made available in some observation schedule
templates to allow the users to provide the information that they have gathered in a uniform
way.

7. Remarks are included if they are necessary or if they are applicable.

8. The list of specific and related observation tasks or activities involved in the observation being
conducted.

9. Tests and related questions that are being asked before, during and after the observation.

3.2.4 Check-list
It is one of the specific instruments for evaluation. A checklist is in the form of a questionnaire in which the answers to the questions are marked. A checklist can be used for self-evaluation or for others' evaluation. It exhibits whether the student has a particular characteristic or not and thus helps in the evaluation of the students.

Characteristics of Checklist: A checklist is used for evaluation of self and others. It is used as an instrument of observation. It involves questions and their answers. It involves marks made by the respondent. It covers the characteristics of a particular subject to be evaluated.

Construction and Application of Checklist: The first horizontal line of the checklist is used to write the name or number of the subject under observation.

The characteristics of the subject or thing to be evaluated are arranged in the vertical column of the evaluation sheet, with corresponding blank spaces for placing tick marks in the adjacent columns. If a characteristic is present in the subject under observation, a tick mark is placed in that column. Thereafter, the frequency of all tick marks is counted and marks are given to students on the basis of predefined norms or standards. Then the percentage, mean, median or correlation is computed.
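The scoring described above, counting tick marks and converting them to a percentage, can be sketched as follows. The trait names, student names and tick data are hypothetical illustrations.

```python
# A sketch of checklist scoring: count each student's tick marks
# and express them as a percentage of all listed traits.
traits = ["punctual", "completes homework", "participates", "neat work"]

# True = tick mark placed (characteristic observed); data is hypothetical
ticks = {
    "Student A": [True, True, False, True],
    "Student B": [True, False, False, False],
}

# Percentage score = (number of ticks / number of traits) * 100
scores = {name: 100 * sum(marks) / len(traits)
          for name, marks in ticks.items()}

print(scores)
```

From such percentage scores the mean, median or a correlation across students can then be computed as the text suggests.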

Uses of Checklist:

1. It is useful for survey and research.

2. The amount of characteristics or traits of subjects can be known.

3. It is helpful to give the appropriate guideline to the subjects.

4. To know the developmental direction of the specific behaviour pattern check list is used.

5. It is useful for self-evaluation and other’s evaluation.

3.2.5 Rating scale


By observing various school and college activities we find changes in the behaviour of students. Over and above that, various personal characteristics are also observed. These characteristics differentiate human behaviour. The teacher observes such behaviour of students with his insight and intelligence and thereby evaluates the personality of the student. If this behaviour of the students is evaluated through a rating scale, it becomes more reliable.

The technique of observation, or the tool with the help of which the researcher or observer externally observes the amount of the various characteristics developed in a person and notes it methodically, is called a rating scale. Here the evaluation is done in relation to the raters' opinion.

Such a tool or instrument which converts the opinion into numbers is called rating scale. It can be
used to evaluate the personality traits, creative skills, individual or social adjustment etc.

The following are the main scales.


1. Numerical Scales: One of the simplest scales to construct, and the easiest to use, is the numerical rating scale. This type of tool usually consists of several items, each of which names or describes the behaviour to be rated, and then offers as alternative responses a series of numbers representing points along the scale. This simple numerical scale does have face validity and therefore seems to be widely accepted, but it is a more subjective or bias-prone tool.

2. Graphic Scale: If the format of the rating scale is such that the characteristic to be rated is represented as a straight line along which some verbal guides are placed, the tool is referred to as a graphic rating scale. It is easy to construct and easy to administer, and is therefore the most widely used of all the specific types of rating scales, but it is a less reliable measure.

3. Standard Scale: In the standard scale approach an attempt is made to provide the rater with more than verbal cues to describe the various scale points. Ideally, several samples of the objects to be rated are included, each with a given scale value determined in experimental studies prior to the use of the scale.

4. Check Lists: An approach which is widely popular, because it is simple to administer and still permits wide coverage in a short time, is the behaviour checklist. It contains a long list of specific behaviours which supposedly represent individual differences, and the rater simply checks whether each item applies. The behaviour index of an individual is obtained by summing up the items which have been checked. In the modified checklist, for reliable results, it is essential to mark each item as applicable, not applicable or not known.

5. Forced Choice Scale: One of the most recent innovations in the rating scale area is the forced choice technique, designed to overcome the major difficulties faced with earlier techniques. In a forced choice rating the rater is required to consider not just one attribute but several characteristics all at one time. Assuming that a relevant item is difficult for a rater to distinguish from one which is not predictive when both are equally favourable to the person, the format requires that only a few of the several behaviours listed in each item be selected as applicable. For example, an item from a forced choice rating scale:

(a) Fair:
i. Insists upon his subordinates being precise and exact.
ii. Stimulates associates to be interested in their work.
(b) Unfair:
i. Allows himself to become burdened with detail.
ii. Does not point out when work is inadequate.

The rater is asked to select the statement which is most appropriate.

6. Ranking Method: It is not possible for a rater to accurately judge equivalent distances at various points along a scale. Under these conditions a ranking method, which requires only that the subjects being rated be placed in order on each trait, can be used. This approach is essential when a large number of persons are to be rated. The ranking approach has the advantage of forcing the judge to make definite discriminations among the ratees, eliminating the subjective differences faced by the judges; a second advantage is that group ranking is uniform.

7. Q Method: Another relative ranking method is the so-called Q-Sort, developed by Stephenson (1953). It is one of the best approaches to obtaining a comprehensive description of an individual, whereas the ranking method gives a comparative description of a group of individuals. Therefore, the Q-Sort is widely used for rating persons at school or on the job for individual guidance.
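Summarising numerical-scale ratings, the simplest of the scales above, can be sketched as follows: several raters score a pupil on a 1-5 scale for each trait, and the ratings are averaged. The traits and scores below are hypothetical illustrations.

```python
# A sketch of summarising numerical rating-scale data.
ratings = {                 # trait -> scores from three raters (1-5 scale)
    "cooperation": [4, 5, 4],
    "leadership":  [3, 3, 2],
}

# Mean rating per trait across raters
means = {trait: sum(scores) / len(scores)
         for trait, scores in ratings.items()}

# Spread (max - min) across raters hints at how objective the ratings are:
# a wide spread suggests the raters disagree, i.e. subjectivity.
spreads = {trait: max(scores) - min(scores)
           for trait, scores in ratings.items()}

print(means)
print(spreads)
```

A small spread across raters is one rough indicator that the scale is being applied objectively.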

Importance of Rating Scale
1. Any characteristic can be measured through rating scale.

2. It is helpful to evaluate the behaviour which other tools can hardly deal with.

3. Abstract characteristics can be evaluated by rating scales.

4. It is helpful to personality or the social development of person.

5. The level of each characteristic of each student of the class can be known.

6. It is helpful to deliver all the necessary information related to the progress of students.

7. The rating scale is also useful for the measurement of other methods or techniques.

8. Within less time more opinions can be obtained.

3.2.6 Anecdotal record


An anecdotal record is an observation that is written like a short story.
They are descriptions of incidents or events that are important to the person observing. Anecdotal records are short, objective and as accurate as possible. An anecdotal record is a record of some significant item of conduct, a record of an episode in the life of a student, a word picture of the student in action, a word snapshot at the moment of the incident, any narration of events which may be significant about his personality.

Characteristics of anecdotal records


1. They should contain a factual description of what happened, when it happened, and under
what circumstances the behaviour occurred.

2. The interpretations and recommended action should be noted separately from the description.

3. Each anecdotal record should contain a record of a single incident.

4. The incident recorded should be one that is considered significant to the student's growth and development.

5. Simple reports of behaviour

6. Result of direct observation.

7. Accurate and specific

8. Gives context of child’s behaviour

9. Records typical or unusual behaviours

Purpose
1. To furnish the multiplicity of evidence needed for a good cumulative record.

2. To substitute specific, exact descriptions of behaviour for vague generalizations about students.

3. To stimulate teachers to look for information that is pertinent in helping each student realize good self-adjustment.

4. To understand individual’s basic personality pattern and his reactions in different situations.

5. The teacher is able to understand her pupil in a realistic manner.

6. It provides an opportunity for healthy pupil- teacher relationship.

7. It can be maintained in the areas of behaviour that cannot be evaluated by other systematic
method.

8. Helps the students to improve their behaviour, as it is a direct feedback of an entire observed
incident, the student can analyse his behaviour better.

9. Can be used by students for self-appraisal and peer assessment.

Construction

1. Keep a notebook handy to make brief notes to remind you of incidents you wish to include in
the record. Also include the name, time and setting in your notes.

2. Write the record as soon as possible after the event. The longer you leave it to write your
anecdotal record, the more subjective and vague the observation will become.

3. In your anecdotal record identify the time, child, date and setting.

4. Describe the actions and what was said.

5. Include the responses of other people if they relate to the action.

6. Describe the event in the sequence that it occurred.

7. The record should be complete.

8. Records should be compiled and filed.

9. They should be emphasized as an educational resource.

10. The teacher should have practice and training in making observations and writing records.

3.3 Assessment of group processes


3.3.1 Nature of group dynamics
1. Group: A group refers to two or more people who share a common meaning and evaluation of
themselves and come together to achieve common goals. In other words, a group is a collection
of people who interact with one another.

2. Group Dynamics: Group dynamics deals with the attitudes and behavioural patterns of a
group. Group dynamics concern how groups are formed, what is their structure and which
processes are followed in their functioning. Thus, it is concerned with the interactions and
forces operating between groups. Group dynamics is relevant to groups of all kinds – both
formal and informal.

3. Characteristics of a Group:

(a) 2 or more persons (if it is one person, it is not a group)


(b) Formal social structure (the rules of the game are defined)

(c) Common fate (they will swim together)
(d) Common goals (the destiny is the same and emotionally connected)
(e) Face-to-face interaction (they will talk with each other)
(f) Interdependence (each one is complementary to the other).
(g) Self-definition as group members (what one is who belongs to the group)
(h) Recognition by others (yes, you belong to the group).
4. Group Dynamics – 4 Important Characteristics
(a) Describes how a group should be organised and operated. This includes pattern of
leadership and cooperation.
(b) Consists of a set of techniques such as role playing, brainstorming, group therapy,
sensitivity training etc.
(c) Deals with internal nature of groups, their formation, structure and process, and the way
they affect individual members, other groups and the organisation as a whole.
(d) Refers to changes which take place within groups and is concerned with the interaction
and forces obtained between group members in a social setting.
5. The nature of Group Dynamics
(a) Orienting assumption
(b) Groups are Real
(c) Group processes are real
(d) Groups are more than the sum of their parts
(e) Groups are living systems
(f) Groups are influential
(g) Groups shape society.

3.3.2 Socio-metric techniques


Socio-metric technique or test, one of the non-testing devices, was first developed by J.L. Moreno
and Helen Jennings in the 1930s. It is a means of presenting simply the structure of social relations,
lines of communication and the patterns of friendship, attractions and rejection that exist at a given
time among members of a particular group. It is commonly observed that some students always like
to stay together, some students are more liked by all students, some students aren’t liked by anyone
and so on. These social relationships existing among them influence all aspects of their development.
It is therefore necessary for the teacher to evaluate these social relationships that exist among the
pupils or students. This socio-metric technique is a method of evaluating the social acceptance of
individual students. In this technique one can know which student would be congenial for a working
group or companions for certain work.

Uses of Socio-Metric Technique


1. By studying the choice of students through socio-metric technique the teacher can determine
the nature and degree of social relationship existing among the students.
2. It is useful in identifying those who are isolated, the one who is not preferred by any other
individual.
3. It is also useful for identifying those who are liked by many others and who can be better leader
of the group. By working with them guidance can be provided.

4. Socio-metric technique is more useful with small groups. The position or status of the individual
is determined on the basis of some particular criterion.

5. It is a simple, economical and natural method of observation and data collection.
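The tally behind a simple socio-metric test can be sketched in a few lines of Python. Each pupil names the classmates they would like to work with; counting the choices received identifies the most-chosen pupils ("stars") and the isolates mentioned above. The pupil names and choices below are hypothetical examples, not data from the text.

```python
# Sketch of a socio-metric tally: count how many times each pupil is chosen.
from collections import Counter

# Hypothetical choices: each pupil lists preferred working companions.
choices = {
    "Asha":  ["Ravi", "Meena"],
    "Ravi":  ["Asha", "Meena"],
    "Meena": ["Ravi"],
    "Kiran": ["Ravi"],
}

# Tally the choices each pupil receives.
received = Counter(c for named in choices.values() for c in named)

# A "star" is chosen most often; an "isolate" receives no choices at all.
stars = [p for p, n in received.items() if n == max(received.values())]
isolates = [p for p in choices if received[p] == 0]

print(received)   # choice counts per pupil
print(stars)      # most-chosen pupil(s)
print(isolates)   # pupils no one chose
```

From such a tally a teacher can see at a glance the degree of social acceptance of each pupil before drawing a full sociogram.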

Limitations
1. Data from socio-metric tests seem quite different from other kinds of data.

2. The investigators or counsellors find it difficult to think of socio-metric measurement of
individuals.

3. The rating of one person by others is an old practice.

4. There are certain traits or qualities that are very difficult to be measured and if at all they are
measured through observations or other tools the measurement may not be accurate and free
from subjectivity.

3.3.3 Steps for formation of group


A team cannot be expected to perform well right from the time it is formed. Forming the right team
is very important. It takes time, patience and support, and members often go through recognizable
stages as they change from being a collection of strangers to a united group with common goals.
Tuckman presented a model of five stages:

1. Orientation (Forming Stage)

(a) Members are discreet with their behaviour. Conflict, controversy, misunderstanding and
personal opinions are avoided even though members are starting to form impressions of
each other.
(b) This stage is characterized by members seeking either a work assignment (in a formal
group) or other benefit, like status, affiliation, power, etc. (in an informal group).
(c) At this stage, group members are learning what to do, how the group is going to operate,
what is expected, and what is acceptable.

2. Power Struggle (Storming Stage)

(a) The storming stage is where dispute and competition are at its greatest because now group
members have an understanding of the work and a general feel of belongingness towards
the group as well as the group members.
(b) This is the stage where the dominating group members emerge, while the less confronta-
tional members stay in their comfort zone.
(c) The next stage in this group is marked by the formation of dyads and triads. Members
seek out familiar or similar individuals and begin a deeper sharing of self.

3. Cooperation and Integration (Norming Stage)

(a) In this stage, the group becomes fun and enjoyable. Group interactions are a lot easier,
more cooperative and productive, with weighed give and take, open communication,
bonding, and mutual respect.
(b) If there is a dispute or disruption, it’s comparatively easy to be resolved and the group
gets back on track.
(c) Group leadership is very important, but the facilitator can step back a little and let group
members take the initiative and move forward together.

4. Synergy (Performing Stage)

(a) At this stage, the morale is high as group members actively acknowledge the talents,
skills and experience that each member brings to the group. A sense of belongingness is
established and the group remains focused on the group’s purpose and goal.
(b) Members are flexible, interdependent, and trust each other. Leadership is distributive and
members are willing to adapt according to the needs of the group.

5. Closure (Adjourning Stage)

(a) This stage of a group can be confusing and is usually reached when the task is successfully
completed. At this stage, the project is coming to an end and the team members are
moving off in different directions.
(b) The group decides to disband. Some members may feel happy over the performance, and
some may be unhappy over the stoppage of meeting with group members. Adjourning
may also be referred to as mourning.

3.3.4 Criteria for assessing tasks


In the present assessment and evaluation system, the criteria are decided by the teachers (in case of
internal evaluation) or the certifying agencies (in case of external board or university examinations).
A student getting a certain percentage of marks or a particular grade is declared pass or fail. Such
criteria are not objective in nature, because they fail to describe the specific areas of development
of children. These criteria even differ from institution to institution and teacher to teacher. The
examining bodies appoint a large number of examiners to evaluate the answer scripts, which results
in inter-examiner variability in the case of essay type and short answer type questions. In the
marking system, one percent of marks can change the status of a student. (For example, a student
with 49 percent marks has third division and with 50 percent marks, second division.) Any effective
evaluation system must include specific, objective descriptions of the performance criteria, such
as ability in language acquisition, skill in analysing and comprehending Mathematics and other
concepts, description of life skill areas etc., for evaluating students' performance. Criteria for the
assessment can be determined by teachers, students or through consultation between the two. Any
continuous and comprehensive evaluation system is most successful when students are involved
in establishing their own criteria for assessment through consultation with teaching staff. These
criteria are then used to assess and grade the performance of the students. A clear understanding
of the intended learning outcomes of the subject is a useful starting point for determining criteria
for assessment. Once these broader learning requirements are understood, a consideration of how
the learning task, and criteria for assessment of that task, fit into those broad requirements can
then follow. The criteria for the process and product of learning may be decided separately.
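The one-mark jump between divisions mentioned above can be sketched as a small rule. The 60 percent cut-off for first division is an assumption for illustration; the text itself states only the 49/50 boundary between third and second division.

```python
# Minimal sketch of a marks-to-division rule.
# The 60% first-division threshold is an assumed value for illustration;
# the text only says 50% gives second division and 49% gives third.
def division(percent):
    if percent >= 60:        # assumed first-division cut-off
        return "first"
    elif percent >= 50:      # stated in the text
        return "second"
    else:
        return "third"

# A single mark flips the division, illustrating the criticism above.
print(division(49), division(50))
```

This brittleness at the cut-off is exactly why the text argues for descriptive, criterion-based reporting instead of raw percentage boundaries.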

3.3.5 Criteria for assessment of social skills in collaborative/co-operative learning situations

Collaborative learning is a method of teaching and learning in which students team together to
explore a significant question or create a meaningful project.

A group of students discussing a lecture or students from different schools working together over
the Internet on a shared assignment are both examples of collaborative learning.

Cooperative learning, which will be the primary focus of this workshop, is a specific kind of
collaborative learning. In cooperative learning, students work together in small groups on a struc-
tured activity. They are individually accountable for their work, and the work of the group as a

whole is also assessed. Cooperative groups work face-to-face and learn to work as a team.

In order to create an environment in which cooperative learning can take place, three things are
necessary:

1. First, students need to feel safe, but also challenged.

2. Second, groups need to be small enough that everyone can contribute.

3. Third, the task students work together on must be clearly defined.

During early childhood, SEL skills are organized around positive engagement with people and
the environment, managing emotions within social interactions, and remaining connected with adults
while successfully moving into the world of peers. These tasks can be difficult to navigate: young
children are often required to sit still or wait, attend, follow directions, approach group play, and get
along with others both at school and outside of school. SEL tasks then change radically for children
entering middle childhood. As children become aware of a wider social network, they learn to
navigate the sometimes-treacherous waters of peer inclusion, acceptance, and friendship. Managing
how and when to show emotion becomes crucial, as does knowing with whom to share emotion-laden
experiences and ideas. Adolescents are expected to form closer relationships with peers; successfully
negotiate a larger peer group and other challenges in the transition to middle and high school;
come to understand the perspectives of others more clearly than ever before; achieve emotional
independence from parents and other adults while maintaining relationships with them; establish
clear gender identity and body acceptance; prepare for adulthood; and establish a personal value
or ethical system and achieve socially responsible behaviour. In the academic realm, older children
and adolescents are required to become much more independent in their engagement with ever more
complex coursework, and to consider how their achievement is moving them toward independence.
SEL is therefore integral to a child’s development from preschool through adolescence and is often
related to his or her success in school.

3.4 Promoting Self-assessment and Peer assessment


3.4.1 Self-assessment
In psychology, self-assessment is the process of looking at oneself in order to assess aspects that
are important to one's identity. It is one of the motives that drive self-evaluation, along with
self-verification and self-enhancement. The self-assessment motive leads people to seek information
that confirms their uncertain self-concept rather than their certain self-concept. Self-assessment
also helps in enhancing the certainty of one's own knowledge. The goal of self-assessment is to
enable students to develop their own judgement.

1. Promotes the skills of reflective practice and self-monitoring.

2. Promotes academic integrity through student self-reporting of learning process.

3. Develops self-directed learning.

4. Increases student’s motivation.

5. They get to know their strength and weakness and the skills they have.

6. In this, students should assess both the process and the product of their learning. While
assessment of the product is often the task of the teacher, assessing their own work encourages
students to understand their learning process.

7. It facilitates sense of ownership of one’s learning.

Drawbacks
1. Can be subjective, since students may not be sincere and may over-evaluate themselves.

2. Some students may under-evaluate themselves.

3. Students may not interpret the criteria properly. Results may not be accurate if they don’t
know the criteria properly.

4. Could be time consuming.

Self-assessment is different from self-grading: self-assessment uses evaluative processes in which
judgement is involved, whereas self-grading is the marking of one's own work against criteria set
by the instructor. Students may initially resist attempts to involve them due to insecurities or lack
of confidence in their ability to objectively evaluate their own work.

3.4.2 Peer assessment


It is a type of collaborative learning technique where students evaluate the work of their peers and
have their own work evaluated by the peers. This dimension of assessment is significantly grounded
in theoretical approaches to active learning and adult learning.

1. Empower students to take responsibility for and manage their own learning.

2. Enable students to learn to assess and give others constructive feedback.

3. Enhance students learning through knowledge diffusion and exchange of ideas.

4. Motivates students to engage with course material more deeply.

5. It reduces marking workload of the teacher.

6. Students can learn how they learn, what others expect, and what areas they should work on
to improve.

7. Students are actively engaged in learning and they may enhance learning.

8. More feedback can be generated by many students than by one or two teachers.

9. Provide a chance to learn from each other.

Drawbacks
1. Can be negatively affected by group collusion.

2. Grades provided can be unfair.

3. Time consuming for students.

4. Difficult for students who are struggling academically.

5. Students may cheat or gang up against one member.

6. Peer pressure or friendship may affect grades given by the students.

Students can use peer assessment as a tactic of antagonism or conflict with other students by
giving unmerited low evaluation. They may give favourable evaluations to their friends.
Students can occasionally apply unsophisticated judgments to their peers. Ex: Students who are
shy, reserved and quieter may get low grades.

3.5 Portfolio assessment
It is a purposeful collection of student work that exhibits the students’ efforts, progress and achieve-
ment in one or more areas. The collection must include student participation in selecting contents,
the criteria for selection, the criteria for judging merit and evidence of student self-reflection.

1. To show student learning process [developmental portfolio]

2. To show samples of the students’ best work [showcase portfolio]

3. It is NOT a scrapbook, but a purposeful collection of anything worth considering.

3.5.1 Scope
Portfolio assessment enables students to reflect on their real performance, shows their weak and
strong domains, allows the teacher to observe students' progress during the learning process, and
encourages students to take responsibility for their own learning. Since a portfolio enables collecting
information from different sources, such as the student's parents, friends, teachers, and the student
himself, it provides teachers with reliable information about the student. Portfolios are important
tools for assessing students' learning products and processes.

Thus, the portfolio has the potential to enable students to learn during assessment and to be
assessed during learning (assessment for learning and assessment of learning). Therefore, it should
be applied in primary education in different courses, such as Science and Technology, Mathematics
and Social Science, to observe the students' progress during the learning process and to provide
the required assistance depending on their performances.

3.5.2 Uses
1. Portfolio assessment matches assessment to teaching.

2. It has clear goals. In fact, they are decided on at the beginning of instruction and are clear to
teacher and students alike.

3. It gives a profile of learners’ abilities in terms of depth, breadth, and growth.

4. It is a tool for assessing a variety of skills not normally testable in a single setting for traditional
testing.

5. It develops awareness of students’ own learning.

6. Caters to individuals in a heterogeneous class.

7. Develops social skills. Students interact with other students in the development of their own
portfolios.

8. Develops independent and active learners.

9. Can improve motivation for learning and thus achievement.

10. Provides opportunity for student-teacher dialogue.

3.5.3 Developing and assessing Portfolio
A portfolio assessment can be an examination of student selected samples of work experiences and
documents related to outcomes being assessed, and it can address and support progress toward
achieving academic goals, including student efficacy. Portfolio assessments have been used for large-
scale assessment and accountability purposes, for purposes of school-to-work transitions, and for
purposes of certification.

Portfolio assessments grew in popularity in the United States in the 1990s as part of a widespread
interest in alternative assessment. Because of high-stakes accountability, the 1980s saw an increase
in norm-referenced, multiple-choice tests designed to measure academic achievement. By the end of
the decade, however, there were increased criticisms over the reliance on these tests, which opponents
believed assessed only a very limited range of knowledge and encouraged a ”drill and kill” multiple-
choice curriculum. Advocates of alternative assessment argued that teachers and schools modelled
their curriculum to match the limited norm-referenced tests to try to assure that their students did
well, ”teaching to the test” rather than teaching content relevant to the subject matter. Therefore, it
was important that assessments were worth teaching to and modelled the types of significant teaching
and learning activities that were worthwhile educational experiences and would prepare students for
future, real-world success.

Involving a wide variety of learning products and artefacts, such assessments would also enable
teachers and researchers to examine the wide array of complex thinking and problem-solving skills
required for subject-matter accomplishment. More likely than traditional assessments to be multi-
dimensional, these assessments also could reveal various aspects of the learning process, including
the development of cognitive skills, strategies, and decision-making processes. By providing feedback
to schools and districts about the strengths and weaknesses of their performance, and influencing
what and how teachers teach, it was thought portfolio assessment could support the goals of school
reform. By engaging students more deeply in the instructional and assessment process, furthermore,
portfolios could also benefit student learning.

3.5.4 Developing a Rubric


Rubrics are sets of criteria or scoring guides that describe levels of performance or understanding.
They provide students with expectations about what will be assessed, standards that need to be met,
and information about where students are in relation to where they need to be.

Developing a rubric is a dynamic process. As the goals of instruction become clearer to the
teacher, the ability to define ranges and levels of execution within the processes of the active learning
experience will make the development of a rubric easier. Some teachers may require a "run-through"
before they are ready to finalize a rubric. With unfamiliar content it is acceptable to write a rubric
after the fact and save it for future reference. Even after a rubric is used, it may need modification.

Guidelines for Developing a Rubric


• Determine which concepts, skills, performance standards you are assessing.

• List the concepts and rewrite them into statements that reflect both cognitive and performance
components.

• Identify the most important concepts or skills being assessed in the task.

• On the basis of the purpose of the task, determine the number of points to be used for the
rubric (example: 4-point scale or 6-point scale).

• Starting with the desired performance, determine the description for each score, remembering
to use the importance of each element of the task or performance to determine the score or
level of the rubric.

• Compare student work to the rubric. Record the elements that caused you to assign a given
rating to the work.

• Revise the rubric descriptions based on performance elements reflected by the student work
that you did not capture in your draft rubric.

• Rethink your scale. Does a []-point scale differentiate enough between types of student work
to satisfy you?

• Adjust the scale if necessary. Reassess student work and score it against the developing rubric.
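As a sketch, a rubric on a 4-point scale can be represented as data and scored mechanically. The criteria and level descriptors below are hypothetical examples for illustration, not taken from the text.

```python
# A rubric sketched as data, assuming a 4-point scale.
# Criteria and descriptors are hypothetical examples.
rubric = {
    "content accuracy": {4: "all facts correct", 3: "minor errors",
                         2: "several errors", 1: "mostly inaccurate"},
    "organisation":     {4: "clear and logical", 3: "mostly clear",
                         2: "some structure", 1: "little structure"},
}

def total_score(ratings):
    """Sum the level assigned for each criterion in the rubric."""
    return sum(ratings[criterion] for criterion in rubric)

ratings = {"content accuracy": 3, "organisation": 4}
print(total_score(ratings), "out of", 4 * len(rubric))
```

Storing the rubric as data makes revision easy: adjusting a descriptor or the number of levels changes the table, not the scoring logic, which matches the advice above that a rubric may need modification after use.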

Analysis, Interpretation, Reporting and Communicating of
Students' performance
4

Contents
4.1 Interpreting students’ performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.1.1 Descriptive statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.2 Grading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.1 Concept of Grading System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.2 Merits of Grading System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2.3 Types of Grading System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.1 Characteristics of a Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.2 Merits of Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.3.3 Demerits of Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4 Reporting students’ performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.1 Progress reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4.2 Cumulative records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.3 Profiles and their uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.4 Portfolios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.5 Using descriptive indicators in report cards . . . . . . . . . . . . . . . . . . . . 70
4.4.6 Role of feedback to stake holders (students, parents, teachers) . . . . . . . . . 70
4.5 Identifying Strengths and weaknesses of Learners . . . . . . . . . . . . . . . . . . . . 72
5.1 Nov/Dec-2018 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.2 December-2019 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

4.1 Interpreting students’ performance


4.1.1 Descriptive statistics
Measures of central tendency
A measure of central tendency describes the tendency of scores to cluster somewhere in the middle
of a distribution. It represents the performance of the group as a whole by a single measure and
enables you to compare two or more groups in terms of their performance. It describes the
characteristics of the given data. Of the many averages, three have been selected as the most
useful in educational research: the mean, the median and the mode.

1. Mean: The mean of a distribution is commonly understood as the arithmetic average. It is
computed by dividing the sum of all the scores by the number of measures. The formula is

M = ΣX / N

where,
M = Mean
Σ = Sum of
X = Score in the distribution
N = Number of measures

2. Median: The median is the middle-most point of a distribution, the midpoint of the given
series. In other words, half of the values in the distribution lie below the midpoint and half
above it. It is a measure of position rather than of magnitude, and is the 50th percentile
point in the given distribution. (1) When the number of scores is odd: if no scores are
repeated, the median is the middle value. (2) When the number of scores is even: the

3. Mode: The mode is defined as the element that appears most frequently in a given set of
elements. Using the definition of frequency given above, mode can also be defined as the
element with the largest frequency in a given data set. For a given data set, there can be more
than one mode. As long as those elements all have the same frequency and that frequency is
the highest, they are all the modal elements of the data set.
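The three averages above can be computed directly with Python's statistics module; the scores used here are a small hypothetical set.

```python
# Computing the three measures of central tendency described above.
import statistics

scores = [12, 15, 15, 18, 20]              # hypothetical test scores

mean = statistics.mean(scores)             # (12+15+15+18+20)/5 = 16
median = statistics.median(scores)         # middle value of the sorted scores = 15
mode = statistics.mode(scores)             # most frequent score = 15

# With an even number of scores the median is the average of the two middle values:
even_median = statistics.median([12, 15, 18, 20])   # (15+18)/2 = 16.5

print(mean, median, mode, even_median)
```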

Properties and Limitations of Mean, Median and Mode:

1. The Mean is the most stable measure of central tendency, easy to understand and easy to
calculate. It takes all values of the data into consideration. It is the best measure for
estimating population values from sample values.

2. The Median is the middle-most point in the distribution. It is the best measure of central
tendency when extreme scores affect the mean, and also when the distribution is open-ended,
i.e., when the lower limit of the lowest class interval or the upper limit of the highest class
interval is not known. But unlike the mean, the median cannot be subjected to mathematical
operations.

3. The Mode is the easiest measure of central tendency to calculate and understand. It can be
identified by simple observation: it corresponds to the score which occurs most frequently in
the distribution. Like the median, the mode cannot be used in mathematical operations. The
mode can be used with nominal, ordinal and interval scales of measurement, whereas the
mean and median can be used with interval or ratio scales of measurement.

Measures of variability
The average of the squared deviations of the measures from their mean is known as the variance,
i.e.,

σ² = Σx² / N

where,
σ² = Variance of the sample,
x = Deviation of the raw scores from the mean,
N = Number of measures.

The measures of central tendency give us the single central value representing their entire data
but fail to represent the deviations of the values in the distribution. We cannot make out anything
about the internal structure of the distribution. That is, how the scores are spread or scattered in a
distribution from a given point of measures of central tendency. It is, therefore, necessary to study
the variability of the scores in the distribution. In order to give a better shape to the given data,
it is not enough to find out the measures of central tendency; it is also necessary to make a
detailed study of the variability of the given data. These measures are known as second-order
measures, based on the first-order measures of mean, median and mode.

Types of Measures of Variability

1. Standard deviation is the most stable index of variability and used in research and experi-
mental studies. Very often this measure is used in all interpretations without which it is not
possible to predict the given data. It differs from the mean deviation in several respects. In
standard deviation we avoid the difficulty of signs by squaring the separate deviations and again
the squared deviation is used in computing this measure. The standard deviation is taken from
the mean and not from the median or mode. Therefore, the standard deviation is called the
root mean squared deviation and represented by the Greek Letter Sigma (σ).

(Find the mean; subtract it from each X and square the result; sum the squared deviations,
divide by N, and take the square root: σ = √(Σx²/N).)

Advantages of the Standard Deviation

• Takes into account all scores in the distribution.


• Provides a good description of variability.
• Tends to be the most reliable measure.
• Plays an important role in advanced statistical procedures.

Disadvantages of the Standard Deviation

• Very sensitive to extreme scores or outliers.


• Cannot be calculated with undeterminable or open-ended scores.
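A minimal sketch of the computation just described (find the mean, square each deviation, average the squares for the variance, then take the square root for the standard deviation), using a small set of hypothetical scores:

```python
# Population variance and standard deviation: sigma^2 = (sum of x^2) / N,
# where x is each score's deviation from the mean.
import math

scores = [4, 8, 6, 2]                              # hypothetical scores
mean = sum(scores) / len(scores)                   # mean = 5.0
squared_deviations = [(s - mean) ** 2 for s in scores]
variance = sum(squared_deviations) / len(scores)   # average of squared deviations
sd = math.sqrt(variance)                           # standard deviation

print(variance, sd)
```

Squaring the deviations removes the difficulty of signs noted above, which is why the standard deviation is called the root mean squared deviation.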

2. Quartile Deviation: Mainly we consider three quartiles, denoted by Q1, Q2 and Q3. Q1 is
called the first quartile; one-fourth (1/4) of the measures in the distribution lie below it. Q2
is called the second quartile, and it is nothing but the median of the distribution; half of the
measures lie below it. Q3 is referred to as the third quartile or the upper quartile, which
divides the distribution in such a way that three-fourths (3/4) of the measures lie below that
point. The quartile deviation Q is one-half of the scale distance between the 75th percentile
Q3 and the 25th percentile Q1 in a frequency distribution. Thereby the quartile deviation Q
is found from the formula

Q = (Q3 − Q1) / 2

Advantages of the Quartile Deviation:

• Can be used with ordinal as well as interval/ratio data.


• Can be found even if there are undeterminable or open-ended scores at either end of the
distribution.

• Not affected by extreme scores or outliers.

Disadvantages of the Quartile Deviation:

• Does not take into account all the scores in the distribution.
• Does not play a role in advanced statistical procedures.

3. Range: The range is the simplest measure of variability. It is easy to understand and simple
to compute. It is the difference between the highest and the lowest scores of the distribution.
It is the most general measure of variability and is computed when we wish to make a rough
comparison of two or more groups for variability. The range takes into account only the
extremes of the series of scores and is a very unreliable measure of variability, because it
considers only the highest and lowest scores in the series and, apart from these two scores,
does not reveal anything about the other scores in the series.

The Range = Highest Score − Lowest Score

Advantages of the Range

• Easy to calculate.
• Can be used with ordinal as well as interval/ratio data.
• Encompasses entire distribution.

Disadvantages of the Range

• Depends on only two scores in the distribution and is therefore not reliable.
• Cannot be found if there are undeterminable or open-ended scores at either end of the
distribution.
• Plays no role in advanced statistics.

4. Rank correlation: When we do not have the actual scores of students on an examination but only their ranks, or when we are dealing with data which are heterogeneously distributed so that the scores are not very meaningful, the best method to determine the correlation coefficient between the given variables is to apply Spearman's Rank Difference Correlation Coefficient. Condition for ρ (rho): the data are in ranks, or capable of being ranked, for both the variables X and Y.

Calculation of Rank Difference Correlation: The rank difference coefficient of correlation stated by Spearman can be calculated by the following formula.

ρ = 1 − (6 ΣD²) / (N(N² − 1))

where,
N = Number of pairs
ρ = Rank difference correlation coefficient
D = Difference between the two ranks assigned to an individual

It is the best correlation coefficient especially when the number of cases is less than 30 and the data are in ranks or capable of being ranked.

Uses of Rank Difference Correlation

• This measure is especially useful when quantitative measures for certain factors cannot
be fixed but the individuals in the group can be arranged in order.
• A knowledge of this is helpful in educational and vocational guidance, prognosis, in the
selection of workers in office or factory and in educational decision making.
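The formula can be exercised with a minimal Python sketch. The ranks below are hypothetical, and tied ranks are not handled:

```python
def spearman_rho(rank_x, rank_y):
    """Spearman's rank difference correlation:
    rho = 1 - 6 * sum(D^2) / (N * (N^2 - 1))."""
    n = len(rank_x)
    # D is the difference between the two ranks of each individual.
    sum_d2 = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - (6 * sum_d2) / (n * (n ** 2 - 1))

# Hypothetical ranks of five students on two tests
rx = [1, 2, 3, 4, 5]
ry = [2, 1, 4, 3, 5]
print(spearman_rho(rx, ry))  # 0.8
```

Identical rankings give ρ = 1, exactly reversed rankings give ρ = −1, and unrelated rankings give values near 0.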

Graphical representation
1. Histograms: Another way of presenting data by means of a graph is the histogram. A histogram presents an accurate picture of the relative positions of the total frequency from one interval to the other. The frequencies within each interval are represented by a rectangle, the base of which equals the length of the interval and the height of which equals the number of scores within that interval. Since in a histogram the scores are assumed to be spread uniformly over the entire interval, the area of each rectangle is directly proportional to the number of measures in the interval. Another type of presenting such data is the column diagram.

Construction of Histogram: The illustration below is a histogram showing the results of a final exam given to a hypothetical class of students. Each score range is denoted by a bar of a certain colour. If this histogram were compared with those of classes from other years that received the same test from the same professor, conclusions might be drawn about intelligence changes among students over the years. Conclusions might also be drawn concerning the improvement or decline of the professor's teaching ability with the passage of time. If this histogram were compared with those of other classes in the same semester who had received the same final exam but who had taken the course from different professors, one might draw conclusions about the relative competence of the professors.

Some histograms are presented with the independent variable along the vertical axis and the
dependent variable along the horizontal axis. That format is less common than the one shown
here.

Steps to draw a histogram:

• Draw a horizontal line at the bottom of a graph paper along which to mark off units representing the class intervals; it is better to start with the class interval of lowest value.
• Draw a vertical line through the extreme end of the horizontal axis along which to mark off units representing the frequencies of the class intervals. Choose a scale which will make the largest frequency (the height of the graph) approximately 75% of the length of the x-axis.
• Draw rectangles with the class intervals as bases, such that the areas of the rectangles are proportional to the frequencies of the corresponding class intervals.

Uses: The histogram is the most popular graph used to represent a continuous frequency distribution. The widths of the rectangles are proportional to the lengths of the class intervals and their heights to the corresponding frequencies; the graph formed by a series of such rectangles adjacent to one another is called a histogram. Thus the area of the histogram is proportional to the total number of frequencies spread over all the class intervals.
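Before any bars are drawn, a histogram rests on a frequency table: each score is assigned to a class interval and the intervals are counted. A minimal Python sketch, with hypothetical scores and intervals:

```python
def class_frequencies(scores, low, high, width):
    """Count how many scores fall in each class interval
    [low, low+width), [low+width, low+2*width), ..."""
    bins = {}
    start = low
    while start < high:
        bins[(start, start + width)] = 0
        start += width
    for s in scores:
        for (lo, hi) in bins:
            if lo <= s < hi:
                bins[(lo, hi)] += 1
                break
    return bins

# Hypothetical exam scores, class intervals of width 10
scores = [12, 15, 22, 25, 27, 31, 34, 38, 45, 48]
for (lo, hi), f in class_frequencies(scores, 10, 50, 10).items():
    print(f"{lo}-{hi - 1}: {'#' * f}")  # a crude text-mode histogram
```

Each row of `#` marks plays the role of one rectangle; a plotting library would draw the same counts as bars.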

2. Frequency curves: The Cumulative Frequency Curve is called the Ogive Curve. We convert the cumulative frequencies into cumulative percentage frequencies and then plot these cumulative percentages against the corresponding class intervals; the resulting graph is the ogive. This curve differs from the cumulative frequency graph in that, in the cumulative frequency graph, the frequencies are not expressed as cumulative percentages. The conversion of cumulative frequencies into cumulative percentages is done by dividing each cumulative frequency by N and multiplying by 100. The Cumulative Frequency Curve is drawn in the same manner as the frequency polygon.

Construction of Cumulative Frequency Curve:

• Draw horizontal line at the bottom of a graph paper along which mark off units to represent
the class interval.
• Draw a vertical line through the extreme end of the horizontal axis along which to mark off the cumulative percentages corresponding to each class interval. Again choose a scale which makes the height of the graph approximately 75% of its width. Join the points so as to obtain the ogive, as shown in the figure.

Uses:

• Percentiles and percentile ranks may be determined quickly and accurately from the ogive
when the curve is carefully drawn and the scale divisions are precisely marked.
• A useful overall comparison of two or more groups is provided when the Cumulative Frequency Curves representing their scores are plotted upon the same horizontal and vertical axes.
• Percentile norms are determined directly from Cumulative Frequency Curve.
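The conversion described above (divide each cumulative frequency by N and multiply by 100) can be sketched in Python; the frequencies are hypothetical:

```python
def cumulative_percentages(frequencies):
    """Turn class-interval frequencies into the cumulative
    percentage frequencies plotted on an ogive."""
    n = sum(frequencies)
    total, result = 0, []
    for f in frequencies:
        total += f                      # running cumulative frequency
        result.append(100 * total / n)  # as a percentage of N
    return result

# Hypothetical frequencies for five class intervals (N = 20)
print(cumulative_percentages([2, 5, 8, 3, 2]))  # [10.0, 35.0, 75.0, 90.0, 100.0]
```

Plotting these values against the upper limits of the class intervals and joining the points yields the ogive; the final value is always 100.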

4.2 Grading
4.2.1 Concept of Grading System
The usual practice of assessment in schools is through conducting examinations. One of the major
drawbacks of our examination system is reporting students’ performance in terms of marks. In order
to minimize the limitations of present day examinations system, a major reform concerns transform-
ing the marking system into a grading system.

Grading is a process of classifying students based on their performance into groups with the help of predetermined standards, expressed in a symbolic form, i.e., letters of the English alphabet. As these grades and their corresponding symbols are pre-determined and well defined, all the stakeholders would understand them uniformly and consistently. While developing the grading system, it is of utmost significance that the meaning of each grading symbol be clearly spelt out. In spite of strict adherence to the pre-determined stipulations, there may be inter-examiner and intra-examiner variations. Sometimes the grades awarded may be compared within and between groups. In this type of comparison, not only the grades awarded by a particular teacher but also the grades awarded by different teachers would be compared. This helps in ascertaining the position of students with reference to a group. Comparing grades awarded by a single teacher (intra-group) and by different teachers (inter-group) with reference to a larger group is considered norm-referenced. This would help in locating the position of a student in a larger group. Hence, norm-referenced measures would help in comparing the grades awarded by different teachers and institutions. Thus, the grades may be used for communicating the students' performance with reference to specified criteria and also the relative position of students with reference to their peer group.

4.2.2 Merits of Grading System


Due to over-emphasis on examinations, both teaching and learning have become examination- cen-
tred. Teachers teach for examinations and students learn for examinations. Award of marks and
declaration of results has become the main purpose of schooling. Actually, Examinations are meant

to examine the process of learning. They help teachers to locate learning variations among children.
Examinations also aim at helping children estimate their learning performance and accordingly im-
prove their proficiencies. But these idealistic purposes of examinations have taken a back seat.
Securing marks rather than improving the levels of attainment has become the main objective
of students. Teaching is a deliberate process of achieving instructional objectives and evaluation
is a means of estimating the extent of their accomplishment. But due to the prevalence of marks
consciousness, attainment of marks rather than assessment of instructional objectives has become all
important.

• As grading involves grouping the students according to their attainment levels, it helps in
categorizing the students as per their attainments of instructional objectives also.

• One of the significant arguments in favour of the grading system is that it creates favourable
conditions for classification of students’ performance on a more convincing and justifiable scale.

• In order to understand why grading is a better proposition than the marking system, it is
necessary to look closely into the various procedures of scaling.

• Grading is a far more satisfactory method than the numerical marking system.

• The justification for the superiority of grading system over marking system is that it signifies
individual learner’s performance in the form of a certain level of achievement in relation to the
whole group.

4.2.3 Types of Grading System


On the basis of the reference point for awarding grades, grading is classified as direct and indirect; it is also divided into absolute and relative. The reference point in the former classification is an approach and, in the latter, a standard of judgment. Absolute and relative grading come under indirect grading. For a better understanding, this scheme of classification is depicted in the accompanying figure.

• DIRECT GRADING: The process of assessing students’ performance qualitatively and express-
ing it in terms of letter grades directly is called direct grading. This type of grading can be used
for assessment of students’ performance in both scholastic and co- scholastic areas. However,
direct grading is mostly preferred in the assessment of co-scholastic learning outcomes. While evaluating co-scholastic learning outcomes, the important factors are listed first and then a student's performance is expressed in a letter grade. This type of grading minimizes inter-examiner variability and is easy to use when compared to indirect grading. Direct grading has
a limitation that it does not have transparency and diagnostic value and does not encourage
competition to the extent required.

• INDIRECT GRADING: In indirect grading, student performance is first assessed in terms of


marks and then they are transformed into letter grades. Different modes may be followed while
transforming the marks into grades. On the basis of the mode of transformation of marks into
grades, there are two types of grading, viz. absolute grading and relative grading. The meaning
and relevance of these two types of indirect grading are explained below.

• ABSOLUTE GRADING: Let us now examine the methodology of awarding grades in terms
of absolute standards. As has been pointed out earlier, absolute grading is based on a pre-
determined standard that becomes the reference point for students’ performance. In absolute
grading, the marks are directly converted into grades on the basis of a pre-determined standard.

Absolute grading can be on a three-point, five-point or nine-point scale for the primary, upper primary and secondary stages respectively.

– Three-Point Scale: Students are classified into three groups as above average, average and below average on the basis of a pre-determined range of scores, as shown in the table below.
– Five-Point Scale: Students are classified into five groups, distinction, first division, second division, third division and unsatisfactory, on the basis of a pre-determined range of scores, as shown in the table below.
– Nine-Point Scale: In absolute grading the ranges of marks or percentages of marks need not necessarily be of equal size. The range of marks taken as the pre-determined standard for classifying students into different groups may be arbitrary. In a nine-point grading scale, the students may be classified into nine groups, namely, outstanding, excellent, very good, good, above average, average, below average, marginal and unsatisfactory. An example of nine-point absolute grading is provided in the table below.
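Converting marks into grades under absolute grading is a simple lookup against the pre-determined ranges. The Python sketch below uses hypothetical cut-offs for a five-point scale; actual boards publish their own ranges:

```python
# Hypothetical cut-offs for a five-point absolute scale,
# listed from the highest lower bound to the lowest.
FIVE_POINT = [
    (75, "Distinction"),
    (60, "First Division"),
    (50, "Second Division"),
    (35, "Third Division"),
    (0, "Unsatisfactory"),
]

def absolute_grade(marks_percent, scale=FIVE_POINT):
    """Return the grade of the first range the marks reach."""
    for cutoff, grade in scale:
        if marks_percent >= cutoff:
            return grade
    return scale[-1][1]  # below every cut-off (e.g. negative input)

print(absolute_grade(82))  # Distinction
print(absolute_grade(41))  # Third Division
```

Because the cut-offs are fixed in advance, every examiner converting the same marks obtains the same grade, which is the simplicity and consistency the merits of absolute grading emphasize.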

Merits of Absolute Grading

– Negative effects of pass/fail are eliminated.
– No grade signifies failure of students.
– Simple and straightforward.
– The meaning of each grade is distinctly understandable.
– Students have the freedom to strive for the highest possible grade.
– No complications.
– Easy for teachers to award grades as per the pre-determined range of marks.

4.3 Norms
A norm-referenced test compares the achievement of an examinee with that of a large group of examinees at the same grade. The representative group is known as the norm group. A norm-referenced test is designed to provide a measure of performance that is interpretable in terms of an individual's relative standing in some known group. The norm group may be made up of examinees at the local, district, state or national level. The development of norm-referenced tests is, however, expensive and time consuming. Bormuth (1970) writes that the purpose of norms is to measure the growth in a student's attainment and to compare his level of attainment with the levels reached by other students in the norm group.

4.3.1 Characteristics of a Norm


• Its basic purpose is to measure student’s achievement in curriculum based skills.

• It is prepared for a particular grade level.

• It is administered after instruction.

• It is used for forming homogeneous or heterogeneous class groups.

• It classifies achievement as above average, average or below average for given grade.

• It is generally reported in the form of Percentile Rank, Linear Standard Score, Normalized
Standard Score and grade equivalent.
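Since norm-referenced results are commonly reported as percentile ranks, a minimal Python sketch of one common convention (ties counted as half) may be useful; the norm-group scores are hypothetical:

```python
def percentile_rank(score, norm_group):
    """Percentage of the norm group scoring below the given score,
    with ties counted as half (a common convention)."""
    below = sum(1 for s in norm_group if s < score)
    ties = sum(1 for s in norm_group if s == score)
    return 100 * (below + 0.5 * ties) / len(norm_group)

# Hypothetical norm-group scores for one grade level
group = [40, 45, 50, 50, 55, 60, 65, 70, 75, 80]
print(percentile_rank(55, group))  # 45.0
```

A percentile rank of 45 here means the student scored as well as or better than about 45% of the norm group; the interpretation is entirely relative to that group.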

4.3.2 Merits of Norms
• To make differential predictions in aptitude testing.

• To get a reliable rank ordering of the pupils with respect to the achievement

• To identify the pupils who have mastered the essentials of the course more than others.

• To select the best of the applicants for a particular programme.

• To find out how effective a programme is in comparison to other possible programmes.

4.3.3 Demerits of Norms


• Test items answered correctly by almost all students are excluded from such tests because of their inadequate contribution to response variance.

• There is lack of congruence between what the test measures and what is stressed in a local
curriculum.

• It promotes unhealthy competition and is injurious to the self-concepts of low-scoring students.

Norm-referenced measurement is the traditional class-based assessment. The measurement act relates to some norm, group or typical performance. It is an attempt to interpret test results in terms of the performance of a certain group of students; this group, against whose scores the results are interpreted, is the norm group. Thus a norm-referenced test typically attempts to measure a more general category of competencies.

4.4 Reporting students’ performance


4.4.1 Progress reports
A critical element of any student’s learning experience is the need for informed and meaningful
feedback to those invested in the student’s progress. Reporting on student progress must have a
well-defined purpose for it to be meaningful. It must clearly identify the information needing to be
communicated, the audience it is intended for and how that information will be used to improve
future or related learning.

Three primary purposes for reporting student progress:

1. To communicate student growth to parents and the broader community.

2. To provide feedback to students for self-evaluation.

3. To document student progress and the effectiveness of instructional programs.

Because reporting student progress serves a variety of purposes, we believe no one method of reporting
is capable of serving all purposes well. A multi-faceted comprehensive reporting system is essential.
Multiple means of reporting progress are divided into two subsets, individual and whole-school reports.

Within these subsets, the means for reporting may include but are not limited to:

1. Individual Subset - report cards, progress reports, standardized testing, evaluated projects
and assignments, portfolios and exhibitions of student work, homework, individual web pages,
parent-teacher conferences, student-teacher conferences and student led conferences.

2. Whole School Subset - standardized testing, open houses, and classroom and school-wide newsletters. Each means of reporting on student progress will include a statement of purpose. The statement of purpose may vary according to the specific type of reporting taking place and the audience it is directed toward.

4.4.2 Cumulative records


This is a longitudinal record of a pupil's educational history. The progress of the development pattern
of each student is recorded cumulatively from period to period in a comprehensive record designed
for the purpose. Such a record is known as a cumulative record.

Elements of a Cumulative Record

1. Data on achievement in various subjects of study

2. Physical development

3. Health matters

4. Participation in co-curricular activities

5. Special achievements

6. Personal details

4.4.3 Profiles and their uses


A profile is, literally, an outline of something, especially a person's face, as seen from one side; the word also denotes a short article giving a description of a person or organization. In the classroom, a learner profile serves this descriptive purpose: it summarizes a student as a learner.

• Focus on knowing your students and helping students know themselves: Before
diving into selecting a template, think about what learner profiles are for and how you will use
them. A template that is created just for you as the teacher is very different than a template
that is designed to help students understand themselves as learners.

• Think differently about data: Learner profiles can be an entirely new take on the idea of
data notebooks. Why not let students use these to track their own progress, reflect on their
learning styles and strengths, and set individual academic and nonacademic goals?

• Give the work back: Learner profiles do not need to be one more thing you have to do as a
teacher. You don’t have to create 30 binders. Think about how you could help students create
their own learner profiles.

• Revisit and revise: Over the course of the year, students are going to change and grow.
Allow space for them to record self-reflections on a regular basis.

4.4.4 Portfolios
• Portfolios remain quite popular in education coursework and with administrators evaluating
senior teachers. One reason might be that the portfolio is a very subjective form of assessment.
For anyone uncomfortable without a grading key or answer sheet, subjective evaluation can be
a scary task. Secondly, teachers often are unsure themselves of the purpose of a portfolio and
its uses in the classroom. Third, there is a question of how the portfolio can be most effectively
used to assess student learning.

• It also is important – especially if you plan to use the portfolio as a major grade for your course
– that you get another teacher to help with the evaluations. That ensures that your assessment
is reliable. Teachers often cut some slack for less academically inclined students, while holding
others to higher standards. The two scores then can be averaged to get a final grade. That will
show you and the student a more accurate assessment of their work products. Finally, student
involvement is very important in the portfolio process. It is vital that students also understand
the purpose of the portfolio, how it will be used to evaluate their work, and how grades for it
will be determined.

4.4.5 Using descriptive indicators in report cards


• The Central Board of Secondary Education (CBSE), for instance, through the Continuous
and Comprehensive Evaluation (CCE) has come out with specific guidelines called “descriptive
indicators” for teachers on how to comment on a student’s performance in both scholastic and
non-scholastic areas. The teachers’ manual asks them not to “label learners as slow, poor or
intelligent.” It also cautions them against making comparisons or giving negative statements.

• For teachers who are not proficient in the language, the descriptive indicators provide an
appropriate choice of words.

• Teachers say that earlier parents of students who are weak in academics would get angry and
disappointed on seeing their ward’s report card, since the remarks were mainly on academics.

• When teachers mark students as average or poor, there is no systematic scaling. So, teachers
should give encouraging observations.

• Parents too seem to be happy: rather than receiving feedback about academics alone, they are now able to understand where their children stand in each area and identify their strengths.

• However, some teachers are of the opinion that such a system is time consuming and would
have a “converse impact” on the child. The students would not be able to take criticism in the
right sense, they say.

4.4.6 Role of feedback to stake holders (students, parents, teachers)


A stakeholder is anyone who is involved in the welfare and success of a school and its students,
including administrators, teachers, staff, students, parents, community members, school board mem-
bers, city councillors and state representatives. Stakeholders may also be collective entities, such as
organizations, initiatives, committees, media outlets and cultural institutions. They have a stake in
the school and its students, which means they have personal, professional, civic, financial interest or
concern in the school. Stakeholder engagement is considered vital to the success and improvement of
a school. The involvement of the broader community of the school with it can improve communica-
tion and public understanding and allows for the incorporation of the perspectives, experiences and
expertise of participating community members to improve reform proposals, strategies, or processes.

• Students- Feedback is any response made in relation to students’ work or performance. It can
be given by a teacher, an external assessor or a student peer. It is usually spoken or written.
Feedback is most effective when it is timely, perceived as relevant, meaningful and encouraging,
and offers suggestions for improvement that are within a student’s grasp. It is intended to ac-
knowledge the progress students have made towards achieving the learning outcomes of a unit.
Good feedback is also constructive, and identifies ways in which students can improve their
learning and achievement. Providing a mark or a grade only, even with a brief comment like "good work" or "you need to improve", is rarely helpful. It is widely recognized that feedback is an important
part of the learning cycle, but both students and teachers frequently express disappointment
and frustration in relation to the conduct of the feedback process. Students may complain that
feedback on assessment is unhelpful or unclear, and sometimes even demoralizing. Addition-
ally, students sometimes report that they are not given guidance as to how to use feedback to
improve subsequent performance. Even worse, students sometimes note that the feedback is
provided too late to be of any use or relevance at all. For their part, lecturers frequently com-
ment that students are not interested in feedback comments and are only concerned with the
mark. Furthermore, lecturers’ express frustration that students do not incorporate feedback
advice into subsequent tasks.

Good Feedback Principles:

– Promote dialogue and conversation around the goals of the assessment task
– Emphasize the instructional aspects of feedback and not only the correctional dimensions.
– Remember to provide feed forward indicate what students need to think about in order
to bring their task performance closer to the goals
– Specify the goals of the assessment task and use feedback to link student performance to
the specified assessment goals
– Engage the students in practical exercises and dialogue to help them to understand the
task criteria
– Engage the students in conversation around the purposes of feedback and feed forward
– Design feedback comments that invite self- evaluation and future self- learning manage-
ment
– Enlarge the range of participants in the feedback conversation - incorporate self and peer
feedback

• Parents- A review process of the new reporting resources was carried out with a number of
schools. Schools that reviewed the materials found them useful and easy to follow. They be-
lieved that the materials signalled a desirable paradigm shift in reporting to parents.

In particular, the following aspects of the materials were highly valued by schools:

– The principles were seen as clear and appropriate.


– Examples illustrating what parents can do at home were seen as useful for either school
reports or school newsletters.
– National standards clarifications were welcomed, considered “overdue” and seen as clear
and useful for both teachers and parents.
– The information sharing process diagram was seen as “helpful and well-constructed”.
– The example of key competencies reporting was seen as useful. The quotes below provide
a flavour of the positive feedback from schools.

– There is a big gap between what schools are providing in the way of feedback and what
parents actually want.
– Parents don’t feel they are getting the right information in a timely manner to support
and coach their children
– Parents commented that the feedback they currently receive is too late to action as the
moment in time has passed.
– Parents prefer reporting based on their child's progression rather than measurement against a benchmark (despite popular belief). This reflects the need for progressive reporting using a method such as the Hattie feedback and reflection model.

Parent Involvement
– Parental involvement decreases dramatically as a child progresses through education.
– Other family support decreases dramatically as a child progresses through education.
– Schools which integrate social activities and teamwork into the curriculum (not just by
making the kids play sport) have happier parents/students.
– Students who participate in task reflection with their parents on a weekly basis are more
likely to be a grade average student than students that participate in task reflection on a
less frequent basis.
• Administrator- To assess student progress toward the established district standards and to
facilitate the planning of various types of instruction, administration should ensure that teach-
ers are utilizing information from a variety of valid and appropriate sources before they begin
planning lessons or teaching. This could include data regarding students’ backgrounds, aca-
demic levels, and interests, as well as other data from student records to ascertain academic
needs and to facilitate planning appropriate initial learning. It is important for the adminis-
tration to note that information regarding students and their families is used by the staff for
professional purposes only and is kept confidential as a matter of professional ethics. Adminis-
trators should determine if teachers are using the numerous formative and summative diagnostic
processes available to assist in planning meaningful instruction. Formative measures include
ongoing teacher monitoring of student progress during the lessons, practice sessions, and on
daily assignments. Measures administered periodically like criterion-referenced tests, grade
level examinations or placement tests that are teacher-made or part of district-adopted material, also provide helpful information on the status of student learning as instruction progresses.
Summative measures like minimum competency examinations, district mastery tests and standardized tests provide a different perspective from the ongoing formative measures. This type of data enables the teacher to evaluate the long-term retention rate of their students and
to compare student learning on a regional, state or national basis. The administrators should
verify that teachers are preparing and maintaining adequate and accurate records of student
progress. This will include the regular and systematic recording of meaningful data regarding
student progress on specific concepts and skills related to the standards for each subject for
the grade level or course they are teaching. Once students’ success levels have been identified
from the records, the teacher should use the information to plan instruction and any necessary
remediation and enrichment. By utilizing ongoing information on achievement, teachers can
maintain consistent and challenging expectations for all students. Students and parents should
be informed of the students’ progress toward achieving district goals and Objectives through
comments on individual work, progress reports, conferencing, report cards and other measures.
Students should be encouraged to participate in self-assessment as a way of motivating students
to improve academic achievement.

4.5 Identifying Strengths and weaknesses of Learners


It’s what every teacher dreads: A student who is falling farther and farther behind. For whatever
reason, classroom instruction does not seem to be enough for the student. Teachers face this dilemma

frequently, despite the teacher’s experience.

To help a struggling student, it may be best to analyze the student’s strengths and weakness. This
requires that the student feel comfortable with the teacher and is able to express his feelings clearly.
After analyzing the student’s strengths and weaknesses, a teacher can develop a plan to help him.

Choose a comfortable setting to talk to the student. Avoid having the discussion around the student's
peers as students are often shy and nervous about revealing their feelings around their friends. Indi-
vidual attention may work best.

Start a general conversation. Ask how she is and if she’s having any problems. These questions
may reveal strengths and weaknesses on their own.

Ask the student where he excels. Have him elaborate on this skill. Ask detailed questions. Take
notes as the student speaks, but maintain eye contact to avoid alarming him. Move to a tangential
subject or skill after the student has described the first thing at which he excels. For example, if the
student says he likes computers, ask him about specific software.

Ask her about what she thinks she could improve. Avoid overtly criticizing her. Instead, prompt
her for self-criticism. Take notes. These criticisms will often be weaknesses.

Previous Years’ Question Papers
5
5.1 Nov/Dec-2018
1. (a) Explain the terms - Assessment, Evaluation and Examination.
(b) Classify different forms of assessment based on the purpose and define each one of them.
[3+7=10]

OR

2. (a) Distinguish between 'assessment for learning' and 'assessment of learning' with suitable
illustrations.
(b) Briefly summarize the recommendations of NCF-2005 on assessment and evaluation. [5+5=10]

3. (a) Describe the steps involved in planning and construction of an achievement test.
(b) Explain the characteristics of a good test. [6+4=10]

OR

4. (a) Discuss the concept of cognitive, affective and psychomotor domains of learning in
assessment and evaluation.
(b) Describe the guidelines for constructing various objective type items with suitable
illustrations. [5+5=10]

5. (a) Examine the importance of assessing students' performance continuously and
comprehensively.
(b) What are anecdotal records? Explain how they are useful for a classroom teacher. [5+5=10]

OR

6. (a) What are anecdotal records? Explain how they are useful for a classroom teacher.
(b) Discuss the need of assessing social qualities of students in a classroom. Explain the
procedure adopted for the same. [4+6=10]

7. (a) Explain the meaning of different measures of central tendency with their uses for a
classroom teacher.
(b) Describe the different procedures used in reporting students' performance in the classroom.
[6+4=10]

OR

8. (a) Compute the arithmetic mean for the following data (N=80)

Class interval 0-9 10-19 20-29 30-39 40-49 50-59 60-69 70-79 80-89 90-99
Frequency 2 3 4 8 9 11 15 13 10 5

(b) Discuss different types of grades and their uses with suitable illustrations. [7+3=10]
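A minimal sketch for part (a) of question 8, using the midpoint method for grouped data. It assumes every class is treated as 10 units wide, so the midpoints are 4.5, 14.5, ..., 94.5 (if the last class is taken literally as 0-100 wide, the result shifts slightly):

```python
# Sketch: arithmetic mean of grouped data using the midpoint method.
# Midpoints assume every class is 10 units wide (0-9 -> 4.5, ..., 90-99 -> 94.5).
midpoints = [4.5, 14.5, 24.5, 34.5, 44.5, 54.5, 64.5, 74.5, 84.5, 94.5]
freqs = [2, 3, 4, 8, 9, 11, 15, 13, 10, 5]

N = sum(freqs)
mean = sum(f * x for f, x in zip(freqs, midpoints)) / N
print(N, mean)  # 80 58.5
```

Under these assumptions the total of f×x is 4680, giving a mean of 4680/80 = 58.5.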

9. Answer any two of the following [2×5=10]

(a) School Based Assessment


(b) Quality Assurance in tools
(c) Assessment of group process
(d) Role of feedback to Stakeholder

5.2 December-2019
1. (a) Explain briefly the purpose of assessment and evaluation in classrooms
(b) How can assessment and evaluation ensure the quality of education? [5+5=10]

(a) Distinguish between assessment of learning and assessment for learning.


(b) What are standardized tests? Explain the characteristics of standardized tests.
(c) Examine the role of criterion-referenced assessment in ensuring attainment of learning
outcomes. [2+3+5=10]

OR

2. (a) Describe the relationship among educational objectives, learning experiences and
evaluation.
(b) What is validity? Explain the methods of estimating the validity of an achievement test.
[5+5=10]

(a) How does the table of specifications ensure the quality of an achievement test?
(b) Elucidate the steps involved in the construction of a diagnostic test
(c) Explain the criteria used for assembling the test items. [2+3+5=10]

3. Examine the importance of self-assessment and peer assessment in classrooms.

4. What are process-oriented tools and techniques? Explain any one of them with suitable
illustrations. [5+5=10]

(a) What are rubrics? Explain their uses.


(b) Describe the criteria used in assessment of social skills in a collaborative learning situation.
(c) Discuss the significance of school-based assessment. What problems are reported by teachers
in implementing it? [2+3+5=10]

(a) Describe different measures of variability. Explain the steps involved in computing the
best measure of variability with suitable illustrations.
(b) Discuss the different procedures adopted for reporting students' performance in classrooms.
[5+5=10]

OR

5. Compute the arithmetic mean for the following scores.

22, 11, 14, 23, 09, 30, 40, 35, 08, 23, 17, 31, 27, 45, 31, 50
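A quick check for question 5: the 16 scores sum to 416, so the mean is 416/16 = 26.

```python
# Sketch: arithmetic mean of ungrouped scores.
scores = [22, 11, 14, 23, 9, 30, 40, 35, 8, 23, 17, 31, 27, 45, 31, 50]
mean = sum(scores) / len(scores)
print(mean)  # 26.0
```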

6. What is grading? Discuss the types and advantages of grades.

7. Discuss the steps involved in computing the rank correlation. Find out Spearman's rank
correlation for the following data.

Scores A B C D E F G H I
Judge - I 25 15 30 40 25 50 45 10 35
Judge - II 30 20 15 35 20 50 40 20 25

[2+3+5=10]
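A sketch for question 7 using the usual classroom formula rho = 1 - 6*sum(d^2)/(n(n^2 - 1)), with tied scores given the average of the ranks they occupy. (With this many ties, a tie-corrected formula would give a slightly different value; this follows the simple d^2 method.)

```python
# Sketch: Spearman's rank correlation, rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
# Tied scores receive the average of the ranks they occupy (rank 1 = highest).
def avg_ranks(xs):
    order = sorted(xs, reverse=True)
    return [sum(i + 1 for i, v in enumerate(order) if v == x) / order.count(x)
            for x in xs]

judge1 = [25, 15, 30, 40, 25, 50, 45, 10, 35]
judge2 = [30, 20, 15, 35, 20, 50, 40, 20, 25]

d_sq = sum((a - b) ** 2 for a, b in zip(avg_ranks(judge1), avg_ranks(judge2)))
n = len(judge1)
rho = 1 - 6 * d_sq / (n * (n ** 2 - 1))
print(d_sq, round(rho, 4))  # 28.5 0.7625
```

Here sum(d^2) = 28.5 and n = 9, so rho = 1 - 171/720 = 0.7625, indicating fairly high agreement between the two judges.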

8. Answer any two of the following [2 × 5=10]

(a) Achievement surveys


(b) Psychomotor domain of learning
(c) Socio-metric techniques.
(d) Norms and its uses

