Module 6 Program Monitoring and Evaluation NSTP
Monitoring and Evaluation
Members:
Marcaida, Carlo
Masilang, Lea
Mendez, Gracie
Mustasa, Jean Mae
Perea, Cherry
Learning Outcomes:
After completion of the module, the students will be able to:
1. develop awareness of the importance of monitoring and evaluation;
2. identify the steps and types of monitoring and evaluation;
3. acquire concrete ideas in preparing a modified monitoring and evaluation system.
Monitoring and Evaluation (M&E) is used to assess the performance of projects, institutions and programmes set up by governments, international organizations and NGOs. Its goal is to improve the current and future management of outputs, outcomes and impact. This topic focuses on helping students understand and learn an effective and efficient M&E process.
PEREA, CHERRY
TYPES OF MONITORING AND EVALUATION (M&E)
Assumption monitoring
Is crucial for a project to understand its
success or failure. It involves measuring external
factors that the project cannot control. For instance, a
project promoting contraceptive use may discover a
drop in contraceptive use due to increased importation
taxes, rather than project failure. This helps explain the
project's success or failure. Therefore, it's essential to
clearly outline working assumptions in the project log
frame.
Financial Monitoring
Financial monitoring is the process of comparing project expenditure against the budgets set during the planning stage, ensuring there are no excesses or wastages. It is crucial for accountability, reporting, and measuring financial efficiency, which means maximizing outputs with minimal inputs.
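The budget-versus-expenditure comparison described above can be sketched as a simple variance check. This is only an illustration; the line items and figures are hypothetical, not part of the module.

```python
# Minimal sketch of financial monitoring: compare actual expenditure
# against the planned budget per line item. All figures are hypothetical.

budget = {"training": 50_000, "materials": 20_000, "transport": 10_000}
actual = {"training": 48_500, "materials": 23_000, "transport": 9_200}

def variance_report(budget, actual):
    """Return the variance (budget - actual) per line item.

    A negative variance flags an excess over budget, which financial
    monitoring is meant to catch early.
    """
    report = {}
    for item, planned in budget.items():
        spent = actual.get(item, 0)
        report[item] = planned - spent
    return report

for item, var in variance_report(budget, actual).items():
    status = "over budget" if var < 0 else "within budget"
    print(f"{item}: variance {var:+,} ({status})")
```

In this sketch, "materials" would be flagged as over budget, prompting the accountability and reporting follow-up the text describes.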
Impact Monitoring
Impact monitoring is the continuous assessment of the long-term effects of project activities on the target population. It is crucial for long-lived projects or programmes with no defined timelines, as it measures changes in impact and assesses the improvement of beneficiaries' conditions. Managers monitor both positive and negative impacts, intended and unintended, through pre-determined indicators.
MUSTASA, JEAN MAE
TYPES OF EVALUATION
MENDEZ, GRACIE
Step 2: Define the evaluation
questions
Evaluation questions should be developed up-front and in collaboration with the primary audience(s) and the other stakeholders to whom you intend to report. Evaluation questions go beyond measurements to ask higher-order questions, such as whether the intervention is worth it or whether it could have been achieved in another way (see examples below).
Step 3: Identify the monitoring questions
For example, for an evaluation question
pertaining to 'Learnings', such as "What worked and
what did not?" you may have several monitoring
questions such as "Did the workshops lead to
increased knowledge on energy efficiency in the
home?" or "Did the participants have any issues with
the training materials?".
The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is important not to begin collecting data before thinking through the evaluation and monitoring questions; otherwise you may end up collecting data for its own sake, which provides no information relevant to the programme.
Step 4: Identify the indicators and data
sources
In this step you identify what information is needed to answer your monitoring questions and where this information will come from (data sources). It is important to consider data collection in terms of the type of data needed and the research design. Data sources can be primary, such as the participants themselves, or secondary, such as existing literature. You can then decide on the most appropriate method for collecting the data from each source.
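One way to keep monitoring questions, indicators, and data sources tied together is a simple plan table. The entries below are hypothetical examples built from the workshop questions mentioned in Step 3, not prescribed by the module.

```python
# Minimal sketch of an M&E plan linking each monitoring question to an
# indicator and a data source. All entries are hypothetical examples.

me_plan = [
    {
        "monitoring_question": "Did the workshops increase knowledge on "
                               "energy efficiency in the home?",
        "indicator": "Average pre/post-test score change",
        "data_source": "primary: participant pre/post surveys",
    },
    {
        "monitoring_question": "Did participants have issues with the "
                               "training materials?",
        "indicator": "Share of participants reporting material problems",
        "data_source": "primary: feedback forms",
    },
]

def sources_by_type(plan):
    """Group the plan's data sources by type (primary vs secondary)."""
    grouped = {}
    for row in plan:
        kind, _, source = row["data_source"].partition(": ")
        grouped.setdefault(kind, []).append(source)
    return grouped

print(sources_by_type(me_plan))
```

Laying the plan out this way makes it easy to check that every monitoring question has at least one indicator and a feasible data source before collection begins.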
MARCAIDA, CARLO
Step 9: Integrate the M&E system
horizontally and vertically
Where possible, integrate the M&E system horizontally (with other organizational systems and processes) and vertically (with the needs and requirements of other agencies) (Simister, 2009).
Try as much as possible to align the M&E
system with existing planning systems, reporting
systems, financial or administrative monitoring
systems, management information systems, human
resources systems or any other systems that might
influence (or be influenced by) the M&E system.
Step 10: Pilot and then roll-out the
system
Once everything is in place, the M&E system may first be rolled out on a small scale, perhaps just at the Country Office level. This gives the opportunity for feedback and for the 'kinks to be ironed out' before a full-scale launch.
Staff at every level should be aware of the overall purpose(s), general overview and key focus areas of the M&E system.
It is also good to inform people about the areas in which they are free to develop their own solutions and those in which they are not. People will need detailed information and guidance in the areas of the system where everyone is expected to do the same thing, or to carry out M&E work consistently. This could include guides, training manuals, mentoring approaches, staff exchanges, and interactive tools.
Final Thoughts
In conclusion, a good M&E system
should be robust enough to answer the
evaluation questions, promote learning and
satisfy accountability needs without being
so rigid and inflexible that it stifles the
emergence of unexpected (and surprising!)
results.