
Course title: Fundamentals of Agricultural Extension Education

Course code: UEXT 106


Lecture no: 07
Lecture topic: Monitoring and Evaluation
Prepared by: Bhagirath Das, Scientist (Agricultural Extension)

1. Monitoring
Extension programmes are mostly funded with public money and are planned and implemented
by an organization which in most cases is a department of a government. In order to justify the
appropriation of public funds and continuing support from the people, it is necessary that their
management as well as impact be properly and adequately monitored and evaluated from time to
time. Monitoring and Evaluation is the continuous gathering and analysis of information to determine whether progress is being made towards pre-specified goals and objectives, and to highlight any unintended (positive or negative) effects of a project/programme and its activities.
Monitoring is a continuous process of collecting, analyzing, documenting, and reporting
information on progress to achieve set project objectives. It helps identify trends and patterns,
adapt strategies and inform decisions for project or programme management. Thus, monitoring is
a management function and begins with the start of a project and ends with the completion of a
project.
Methods and Tools of Monitoring

Methods:
✓ Observation: Regular visits to project sites to observe activities, processes, and immediate outcomes.
✓ Surveys and Questionnaires: Regularly distributed to gather feedback from beneficiaries and stakeholders.
✓ Interviews: Conducted with project staff, beneficiaries, and other stakeholders to gather qualitative data.
✓ Focus Group Discussions (FGDs): Small group discussions to gain insights into participants' views and experiences.
✓ Administrative Data Review: Analyzing records and reports generated by the project for tracking progress.
✓ Participatory Methods: Engaging community members in the monitoring process to ensure their perspectives are included.

Tools:
✓ Progress Reports: Regularly updated reports detailing the status of activities, achievements, and issues.
✓ Checklists: Structured lists used to ensure all necessary tasks are completed and standards are met.
✓ Management Information Systems (MIS): Digital systems to collect, store, and analyze data for real-time monitoring.
✓ Dashboards: Visual tools to provide quick overviews of key performance indicators (KPIs) and project status.
✓ Logbooks: Records maintained by project staff to document daily activities and observations.
✓ Monitoring Plans: Detailed documents outlining what will be monitored, how, and by whom.
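The tools above include progress reports and dashboards built around key performance indicators. As a rough, hypothetical illustration of the kind of summary such tools produce, the sketch below compares invented planned and actual figures for a few activities; the activity names, targets and the 75% threshold are assumptions for illustration only.

```python
# Minimal sketch: summarising monitoring data as planned vs. actual progress.
# All activity names and figures are hypothetical examples.

planned = {"farmer trainings": 40, "demonstration plots": 12, "field visits": 120}
actual = {"farmer trainings": 28, "demonstration plots": 12, "field visits": 95}

for activity, target in planned.items():
    done = actual.get(activity, 0)
    pct = 100 * done / target if target else 0
    status = "on track" if pct >= 75 else "needs attention"
    print(f"{activity}: {done}/{target} ({pct:.0f}%) - {status}")
```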

Characteristics of Monitoring
✓ Conducted continuously
✓ Keeps track and maintains oversight
✓ Documents and analyzes progress against planned program activities
✓ Focuses on program inputs, activities and outputs
✓ Looks at processes of program implementation
✓ Considers continued relevance of program activities to resolving the problem being addressed
✓ Reports on program activities that have been implemented
2. Evaluation
Evaluation is a periodic assessment, as systematic and objective as possible, of an ongoing or
completed project, programme or policy, its design, implementation and results. It involves
gathering, analysing, interpreting and reporting information based on credible data. The aim is to
determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness,
impact and sustainability.
Characteristics of Evaluation
✓ Conducted at important program milestones
✓ Provides in-depth analysis
✓ Compares planned with actual achievements
✓ Looks at processes used to achieve results
✓ Reports on how and why results were achieved
Purpose/Importance of Monitoring and Evaluation
➢ Support project/programme implementation with accurate, evidence-based reporting that
informs management and decision-making to guide and improve project/programme
performance.
➢ Contribute to organizational learning and knowledge sharing by reflecting upon and
sharing experiences and lessons.
➢ Uphold accountability and compliance by demonstrating whether or not our work has been
carried out as agreed and in compliance with established standards and with any other
stakeholder requirements.
➢ Provide opportunities for stakeholder feedback
➢ Promote and celebrate project/programme work by highlighting accomplishments and
achievements, building morale and contributing to resource mobilization.
➢ Support strategic management by providing information to inform the setting and adjustment of objectives and strategies.
➢ Build the capacity, self-reliance and confidence of stakeholders, especially beneficiaries and implementing staff and partners, to effectively initiate and implement development initiatives.

Types of Evaluation
1. Formative evaluation: Evaluates a programme/project during the development stage in order
to make modifications to help improve the programme. Formative evaluations concentrate on
examining and changing processes as they occur. They provide timely feedback about program services and allow program adjustments to be made "on the fly" to help achieve program goals.
2. Process evaluation: It is used to assess how a program is being implemented, including factors
such as participation rates, the quality of delivery, and the degree to which the program is being
implemented as intended. Process evaluation, as outlined by Hawe and colleagues, will help answer
questions about your program such as:
✓ Has the project reached the target group?
✓ Are all project activities reaching all parts of the target group?
✓ Are participants and other key stakeholders satisfied with all aspects of the project?
✓ Are all activities being implemented as intended? If not, why?
3. Summative evaluation: It occurs at the end of a program cycle and provides an overall
description of program effectiveness. Summative evaluation examines program outcomes to
determine overall program effectiveness. Summative evaluation is a method for answering some
of the following questions:
✓ Were your program objectives met?
✓ Will you need to improve and modify the overall structure of the program?
✓ What is the overall impact of the program?
4. Outcome evaluation: Outcome evaluation is used to measure the immediate effect of the
program and is aligned with the programme's objectives. It measures how well the program's
objectives (and sub-objectives) have been achieved. Outcome evaluation will help answer
questions such as:
✓ How well has the project achieved its objectives (and sub-objectives)?
✓ How well have the desired short-term changes been achieved?
5. Impact evaluation: Impact evaluation is concerned with the long-term effects of the program
and is generally used to measure the program goal. Consequently, it measures how well the
program goal has been achieved. Impact evaluation will help answer questions such as:
✓ Has the overall program goal been achieved?
✓ What factors, if any, outside the program have contributed to or hindered the desired change?
Indicators for evaluation
Indicators are necessary to help determine what data needs to be collected to assist in assessing the
progress of the program and whether it is on track to achieve its goals and objectives. Indicators for
evaluation are classified into two categories:

1. Process indicators
Process indicators monitor the implementation of the program as well as program inputs. Program
input indicators are related to financial resources, human resources, administrative resources and
equipment required.
Process indicators for the program itself monitor how well the program is implemented, whether it is reaching the intended target and whether it is of an acceptable quality. Program reach indicators include the following (a short worked sketch follows this list):
✓ Number of participants
✓ Proportion of the target population participating in the program
✓ The proportion of program sessions or activities that participants attend or are involved in
✓ Dropout rate
✓ Number of key stakeholders involved
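As a rough illustration of the reach indicators listed above, the sketch below computes the proportion of the target population participating and the dropout rate; all figures are hypothetical examples, not data from the lecture.

```python
# Minimal sketch of computing program reach indicators from monitoring data.
# The figures below are hypothetical examples.

target_population = 500      # farmers in the target area
enrolled = 180               # farmers who joined the program
completed = 150              # farmers still active at the end

reach = 100 * enrolled / target_population             # proportion of target reached
dropout_rate = 100 * (enrolled - completed) / enrolled

print(f"Reach: {reach:.1f}% of the target population")
print(f"Dropout rate: {dropout_rate:.1f}%")
```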
2. Outcome and Impact indicators
Outcome indicators monitor the progress of achieving the program’s objectives, which usually
relate to some type of short-term changes. Outcome indicators will usually relate to changes in
knowledge, attitudes and intended behaviour. Impact indicators are used to assess if the program
goal has been achieved and are therefore more likely to include actual behaviours, health status
and quality of life (longer term changes or changes sustained over time).
Outcome indicators may include:
✓ Changes in awareness, knowledge and skills
✓ Changes in intended behaviour
✓ Changes in individual capacity, e.g. confidence, self-esteem, social skills, problem-solving skills, etc.
Impact indicators may include:
✓ Increased mental wellbeing
✓ Increased physical wellbeing
✓ Community engagement
✓ Improved Quality of Life
✓ Increased employment
Difference between Monitoring and Evaluation

Definition
Monitoring: Ongoing process of systematically collecting, analyzing, and using information to track progress towards objectives.
Evaluation: Systematic and objective assessment of an ongoing or completed project, program, or policy.

Purpose
Monitoring: Ensure activities are implemented as planned; inform decision-making and course correction.
Evaluation: Determine relevance, efficiency, effectiveness, impact, and sustainability; support learning and accountability.

Frequency
Monitoring: Regular and ongoing (e.g., daily, weekly, monthly).
Evaluation: Periodic (e.g., mid-term, end-term, post-project).

Focus
Monitoring: Inputs, processes and outputs.
Evaluation: Outcomes and impacts, both intended and unintended.

Timeframe
Monitoring: Short-term, throughout the project or program lifecycle.
Evaluation: Medium to long-term, often after a significant portion or all of the project has been completed.

Done by
Monitoring: Staff within the same agency.
Evaluation: External bodies or agencies.

Source of information
Monitoring: Internal documents, e.g. monthly or quarterly reports, minutes of meetings.
Evaluation: Internal and external documents, e.g. annual reports, national statistics, consultants' reports.

Points of data collection
Monitoring: Multiple points.
Evaluation: At intervals only.

Methods and Tools of Evaluation

Methods:
✓ Surveys: Structured instruments used to collect data from a large number of respondents.
✓ Interviews: In-depth discussions with key informants, stakeholders, and beneficiaries to gather detailed information.
✓ Focus Group Discussions (FGDs): Group discussions used to explore specific topics in depth.
✓ Case Studies: In-depth examination of specific instances or examples within the project to illustrate broader trends.
✓ Document Review: Analysis of project-related documents, reports, and literature.
✓ Impact Assessments: Methods to measure the long-term effects and changes resulting from the project.
✓ Cost-Benefit Analysis: Comparing the costs of the project against its benefits to assess its economic efficiency.
✓ Randomized Controlled Trials (RCTs): Experimental method to assess the impact of interventions by comparing outcomes between a treatment group and a control group.

Tools:
✓ Evaluation Frameworks: Structured plans outlining the evaluation's objectives, questions, methodologies, and timelines.
✓ Surveys and Questionnaires: Tools for collecting quantitative data from a wide range of respondents.
✓ Interview Guides: Structured outlines to ensure consistency and comprehensiveness in interviews.
✓ Data Analysis Software: Tools such as SPSS, STATA, R, or NVivo to analyze quantitative and qualitative data.
✓ Logic Models/Theory of Change: Visual representations that link project activities to expected outcomes and impacts.
✓ Scorecards: Tools to rate and compare performance against predefined criteria.
✓ Evaluation Reports: Comprehensive documents that present the findings, conclusions, and recommendations from the evaluation.
✓ Impact Evaluation Tools: Specific methodologies like Difference-in-Differences (DiD), Propensity Score Matching (PSM), and regression analysis to assess impacts.
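The impact evaluation tools listed above include Difference-in-Differences (DiD). As a rough sketch of the underlying idea, the example below compares the change in a hypothetical outcome (average crop yield) between a treated group and a comparison group before and after an intervention; all numbers are invented for illustration, and a real DiD analysis would typically use regression on survey data.

```python
# Minimal sketch of the Difference-in-Differences (DiD) idea with made-up numbers.
# Outcome: average crop yield (quintals/ha) before and after an extension intervention.

treat_before, treat_after = 18.0, 23.5       # villages that received the program
control_before, control_after = 17.5, 19.0   # comparable villages that did not

treat_change = treat_after - treat_before        # 5.5
control_change = control_after - control_before  # 1.5

did_estimate = treat_change - control_change     # 4.0
print(f"Estimated program effect (DiD): {did_estimate:.1f} quintals/ha")
```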

Steps involved in an extension programme evaluative process may be as follows:
1. Formulate evaluation objectives: Specific objectives to be achieved through the evaluative
process must be clearly and adequately identified and stated. All further efforts should be
knit around these objectives.
2. Identify indicators: To identify indicators, the beneficiaries of the programme are identified and the kinds of behavioural changes expected in them are stated clearly.
3. Decide the kind of information needed: After identifying the indicators for evaluating a
program's management and performance, specific information to be collected should be
determined. Given the volume of potential data, an extension worker must carefully choose
the type and amount of information to collect.
4. Sampling: The purpose of sampling is to take a relatively small number of units from a population in such a way that the evidence collected from them becomes representative evidence of the entire population (see the brief sketch after these steps).
5. Decide the design of evaluation: While experimental designs are ideal for evaluation,
extension programs rarely allow for this. Instead, ex-post facto evaluation is often more
practical.
6. Collection and analysis of evaluation evidence: There are many methods for collecting
information for evaluative purposes, such as the mail questionnaire, personal interview,
distributed questionnaires, group interviews, case studies, systematic field observations,
systematic study of secondary data etc. Selection of the right kind of data collection method
will depend on the objectives of the evaluation, kind of information needed, time and
resources available.
7. Interpretation of the results: The evaluation results must clearly state the achievements,
failures and future adjustments needed. A written report of the evaluation findings should
be prepared and made available to all concerned.
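As referenced in step 4, the sketch below shows one simple way to draw a random, representative sample of beneficiaries for an evaluation survey; the population list, sample size and seed are hypothetical choices for illustration.

```python
# Minimal sketch of simple random sampling for evaluation (step 4 above).
# The population list and sample size are hypothetical examples.
import random

population = [f"farmer_{i}" for i in range(1, 501)]  # 500 programme beneficiaries
sample_size = 50

random.seed(42)                                  # fixed seed so the draw is reproducible
sample = random.sample(population, sample_size)  # each unit has an equal chance of selection

print(f"Selected {len(sample)} of {len(population)} beneficiaries for the survey")
print(sample[:5])
```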
Challenges of Evaluation
1. Assessing long term impacts: Determining the long-term consequences of a program can
be difficult due to time constraints, external factors influencing outcomes, and the
challenge of isolating program effects from other variables.
2. Dealing with uncertainty: Evaluation often involves complex systems with multiple
interacting factors, making it challenging to establish clear cause-and-effect relationships
and predict future outcomes with certainty.
3. Reconciling different agendas: Stakeholders often have varying interests and
expectations, making it difficult to align evaluation goals and priorities. Balancing
competing demands can be challenging.
4. Needing to simplify what is complex: Complex programs and their environments often
require simplification for evaluation purposes. This can lead to oversimplification and a
loss of nuance.
5. Creating a learning culture: Fostering an organizational culture that values evaluation
and uses findings to improve practice requires significant effort and change management.

6. Coping with political imperatives: Evaluation findings may conflict with political
agendas or vested interests, leading to resistance or manipulation of results.
7. Overcoming a lack of capacity: Inadequate resources, skills, and time can hinder effective
evaluation planning, implementation, and analysis.
8. Managing conflict: Evaluation can uncover sensitive issues or generate disagreements
among stakeholders, requiring effective conflict resolution skills.
9. Balancing rigor and relevance: Ensuring that evaluation findings are both
methodologically sound and useful for decision-making can be challenging.
10. Ethical considerations: Protecting participant privacy, ensuring data security, and
maintaining ethical standards throughout the evaluation process can be complex.

