
Module 6: Program Monitoring and Evaluation

Members:
Marcaida, Carlo
Masilang, Lea
Mendez, Gracie
Mustasa, Jean Mae
Perea, Cherry
Learning Outcomes:
After completion of the module, the students will be able to:
1. develop awareness of the importance of monitoring and evaluation;
2. identify the steps and types of monitoring and evaluation;
3. acquire concrete ideas for preparing a modified monitoring and evaluation system.

PEREA, CHERRY
Monitoring and Evaluation (M&E) is used to assess the performance of projects, institutions and programmes set up by governments, international organizations and NGOs. Its goal is to improve current and future management of outputs, outcomes and impact. This module focuses on helping students understand and apply an effective and efficient M&E process.

PEREA, CHERRY
TYPES OF MONITORING AND EVALUATION (M&E)

Monitoring is the systematic collection of data during project implementation to assess progress towards set objectives or goals. It occurs throughout the project life cycle, with tools embedded into activities so the process runs seamlessly. There are various types of monitoring in M&E, including process, technical, assumption, financial, and impact monitoring.

Process monitoring / physical progress monitoring
A method of collecting and analyzing routine data to determine whether project tasks and activities are leading towards the intended results. It verifies progress by measuring inputs, activities, and outputs, answering questions about what has been done so far, where, when, and how.
Technical monitoring
A crucial process in project implementation
to evaluate the effectiveness of the strategy. It
involves assessing the technical aspects of the project,
such as the activities to be carried out. For instance, in
a safe water project, physical progress monitoring may
reveal low uptake of chlorination due to time
constraints, prompting a change in strategy to
household distribution of bottled chlorine.

Assumption monitoring
Crucial for understanding a project's success or failure. It involves measuring external factors that the project cannot control. For instance, a project promoting contraceptive use may discover that a drop in contraceptive use is due to increased importation taxes rather than project failure; monitoring this assumption helps explain the result. It is therefore essential to clearly outline the working assumptions in the project log frame.
Financial Monitoring
The process of comparing project expenditure with the budgets set during the planning stage, ensuring there are no excesses or wastages. It is crucial for accountability, reporting, and measuring financial efficiency, which involves maximizing outputs with minimal inputs.
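The budget-versus-expenditure comparison described above can be illustrated with a short, hypothetical sketch. The line items, amounts, and the simple outputs-per-cost efficiency ratio below are assumptions for illustration only, not figures from any actual project.

```python
# Minimal sketch of financial monitoring: compare planned budgets with
# actual expenditure and flag variances. All figures are hypothetical.
budget = {"training": 50_000, "materials": 20_000, "transport": 10_000}  # planned amounts
spent = {"training": 48_500, "materials": 26_000, "transport": 9_000}    # actual expenditure

for line_item, planned in budget.items():
    actual = spent.get(line_item, 0)
    variance = actual - planned                       # positive value = overspend
    pct = 100 * variance / planned
    status = "OVERSPENT" if variance > 0 else "within budget"
    print(f"{line_item:10s} planned {planned:>7,} spent {actual:>7,} "
          f"variance {variance:>+7,} ({pct:+.1f}%) -> {status}")

# A crude efficiency measure: outputs delivered per unit of cost.
workshops_delivered = 12                              # hypothetical output count
print(f"Efficiency: {workshops_delivered / sum(spent.values()):.6f} workshops per unit of cost")
```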

Impact Monitoring
A continuous assessment of the long-
term effects of project activities on the target
population. It is crucial for long-lived projects or
programs with no defined timelines to measure
impact change and assess the improvement of
beneficiaries' conditions. Managers monitor both
positive and negative impacts, intended and
unintended, through pre-determined indicators.
MUSTASA, JEAN MAE
TYPES OF EVALUATION

1. Participatory evaluation, in which representatives of agencies and stakeholders work together in designing, carrying out and interpreting an evaluation.

2. Process-based evaluation, an evaluation of the internal dynamics of a project, its policy instruments, its service delivery mechanisms, its management practices, and the linkages among these.

3. Outcome-based evaluation, which asks whether the organization is doing the right activities to bring about the expected outcomes.
MUSTASA, JEAN MAE
10 Steps to Design a Monitoring and Evaluation (M&E) System

Before launching into the steps, please note that the development of an M&E system is a participatory exercise. Staff at different levels of the organisation who will be expected to maintain or use the new M&E system should always be consulted. This might include staff at head offices or secretariats, staff in regional or country offices, and staff at programme or project level.

MUSTASA, JEAN MAE


Step 1: Define the scope and purpose
This step involves identifying the evaluation
audience and the purpose of the M&E system. M&E
purposes include supporting management and
decision-making, learning, accountability and
stakeholder engagement.

'Be on the same page as the evaluation audience'

MENDEZ, GRACIE
Step 2: Define the evaluation
questions
Evaluation questions should be developed up-front and in collaboration with the primary audience(s) and the other stakeholders you intend to report to. Evaluation questions go beyond measurements to ask higher-order questions, such as whether the intervention is worth it or whether it could have been achieved in another way (see examples below).

MENDEZ, GRACIE
Step 3: Identify the monitoring questions
For example, for an evaluation question
pertaining to 'Learnings', such as "What worked and
what did not?" you may have several monitoring
questions such as "Did the workshops lead to
increased knowledge on energy efficiency in the
home?" or "Did the participants have any issues with
the training materials?".
The monitoring questions will ideally be answered through the collection of quantitative and qualitative data. It is important not to start collecting data before thinking about the evaluation and monitoring questions; doing so may lead to collecting data just for the sake of collecting data, which provides no relevant information to the programme.
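To make the relationship concrete, the short sketch below maps the 'Learnings' evaluation question from the example above to its monitoring questions. It is a minimal, hypothetical illustration of how such a mapping might be recorded, not a required structure.

```python
# Hypothetical mapping from one evaluation question to the monitoring
# questions that will feed into answering it (taken from the example above).
monitoring_plan = {
    "What worked and what did not?": [
        "Did the workshops lead to increased knowledge on energy efficiency in the home?",
        "Did the participants have any issues with the training materials?",
    ],
}

for evaluation_question, monitoring_questions in monitoring_plan.items():
    print(f"Evaluation question: {evaluation_question}")
    for question in monitoring_questions:
        print(f"  - {question}")
```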
Step 4: Identify the indicators and data
sources
In this step you identify what information is needed to answer your monitoring questions and where this information will come from (data sources). It is important to consider data collection in terms of the type of data and the research design required. Data sources can be primary, such as the participants themselves, or secondary, such as existing literature. You can then decide on the most appropriate method to collect the data from each data source.

“Data, data and more data”
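One way to keep monitoring questions, indicators, data sources and collection methods linked is to record them together in a single structure. The sketch below uses a plain Python data class for this; the field names and the sample entry are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One row of a hypothetical indicator matrix linking a monitoring
    question to the data that will answer it."""
    monitoring_question: str
    indicator: str
    data_source: str        # primary (e.g. the participants) or secondary (e.g. literature)
    collection_method: str
    frequency: str

indicators = [
    Indicator(
        monitoring_question="Did the workshops increase knowledge of home energy efficiency?",
        indicator="% of participants scoring at least 80% on the post-workshop quiz",
        data_source="Primary: workshop participants",
        collection_method="Pre/post knowledge quiz",
        frequency="After every workshop",
    ),
]

for row in indicators:
    print(f"{row.indicator}  <-  {row.data_source} ({row.collection_method}, {row.frequency})")
```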


Step 5: Identify who is responsible for
data collection, data storage,
reporting, budget and timelines
It is advisable to assign responsibility for data collection and reporting so that everyone is clear on their roles and responsibilities.
Collection of monitoring data may occur regularly over short intervals, or less regularly, such as half-yearly or annually. Likewise, the timing of evaluations (internal and external) should be noted.
You may also want to note any requirements needed to collect the data (staff, budget, etc.). It is advisable to have some idea of the cost associated with monitoring, as you may have great ideas for collecting a lot of information, only to find out that you cannot afford it all.
MASILANG, LEA
Additionally, it is good to determine how the
collected data will be stored. A centralised electronic
M&E database should be available for all project staff to
use. The M&E database options range from a simple Excel file to comprehensive M&E software such as LogAlto.
LogAlto is a user-friendly cloud-based M&E
software that stores all information related to the
programme such as the entire log frame (showing the
inputs, activities, outputs, outcomes) as well as the
quantitative and qualitative indicators with baseline,
target and milestone values. LogAlto also allows for the
generation of tables, scorecards, charts and maps.
Quarterly Progress reports can also be produced from
LogAlto.
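Whatever tool is used, the underlying records are similar: each indicator carries baseline, milestone and target values against which actual results are tracked. The sketch below keeps such records in a plain CSV file as a stand-in for an Excel sheet or M&E database; the column names and figures are hypothetical.

```python
import csv
from pathlib import Path

# Hypothetical columns for a minimal indicator-tracking sheet
# (a simple CSV/Excel stand-in for a dedicated M&E database).
FIELDNAMES = ["indicator", "baseline", "milestone", "target", "actual"]

rows = [
    {"indicator": "Households using bottled chlorine", "baseline": 120,
     "milestone": 300, "target": 500, "actual": 275},
]

path = Path("indicator_tracker.csv")
with path.open("w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()
    writer.writerows(rows)

# Re-read the sheet and report progress toward each target.
with path.open() as f:
    for row in csv.DictReader(f):
        progress = 100 * int(row["actual"]) / int(row["target"])
        print(f"{row['indicator']}: {progress:.0f}% of target reached")
```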
Step 6: Identify who will evaluate the
data and how it will be reported
In most programmes there will be an internal and an
independent evaluation (conducted by an external
consultant).
For an evaluation to be used (and therefore useful) it is
important to present the findings in a format that is
appropriate to the audience. A 'Marketing and
Dissemination Strategy’ for the reporting of evaluation
results should be designed as part of the M&E system.

'Have a strategy to prevent persons from falling asleep during the presentation of evaluation findings'
Step 7: Decide on standard forms and
procedures
Once the M&E system is designed there will be
a need for planning templates, designing or adapting
information collection and analysis tools, developing
organisational indicators, developing protocols or
methodologies for service-user participation,
designing report templates, developing protocols for
when and how evaluations and impact assessments
are carried out, developing learning mechanisms,
designing databases, and the list goes on (Simister, 2009).
Step 8: Use the information derived from Steps 1-7 above to fill in the 'M&E System' template
You can choose from any of the
templates presented in this article to
capture the information. Remember, they
are templates, not cast in stone. Feel free
to add extra columns or categories as you
see fit.

MARCAIDA, CARLO
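As a concrete illustration of what one filled-in row of such a template might contain, the sketch below gathers the decisions from Steps 1-7 into a Python dictionary. The column headings and values are hypothetical and can be extended or renamed, just as the text suggests.

```python
# One hypothetical row of an 'M&E System' template, pulling together
# the decisions made in Steps 1-7. Add or rename keys as needed.
me_system_row = {
    "evaluation_question": "What worked and what did not?",
    "monitoring_question": "Did the workshops increase knowledge of home energy efficiency?",
    "indicator": "% of participants scoring at least 80% on the post-workshop quiz",
    "data_source": "Workshop participants (primary)",
    "collection_method": "Pre/post quiz",
    "responsible": "Project officer",
    "frequency": "After every workshop",
    "storage": "Central M&E database (e.g. shared spreadsheet)",
    "reporting": "Quarterly progress report",
}

for column, value in me_system_row.items():
    print(f"{column:20s} {value}")
```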
Step 9: Integrate the M&E system
horizontally and vertically
Where possible, integrate the M&E system
horizontally (with other organizational systems and
processes) and vertically (with the needs and
requirements of other agencies) (Simister, 2009).
Try as much as possible to align the M&E
system with existing planning systems, reporting
systems, financial or administrative monitoring
systems, management information systems, human
resources systems or any other systems that might
influence (or be influenced by) the M&E system.
Step 10: Pilot and then roll-out the
system
Once everything is in place, the M&E system
may be first rolled out on a small scale, perhaps just
at the Country Office level. This will give the
opportunity for feedback and for the ‘kinks to be
ironed out’ before a full scale launch.
Staff at every level should be aware of the overall purpose(s), general overview and key focus areas of the M&E system.
It is also good to inform people about the areas in which they are free to develop their own solutions and those in which they are not. People will need detailed information and guidance in the areas of the system where everyone is expected to do the same thing, or to carry out M&E work consistently.
This could include guides, training manuals, mentoring approaches, staff exchanges, and interactive sessions.
Final Thoughts
In conclusion, a good M&E system
should be robust enough to answer the
evaluation questions, promote learning and
satisfy accountability needs without being
so rigid and inflexible that it stifles the
emergence of unexpected (and surprising!)
results.

Kruno Karlovcec, a blogger, made a valid observation that the 10 steps should be envisioned as a loop, with the last step feeding back into Step 1, rather than as a strictly sequential process. A feedback loop facilitates continuous development and improvement.

PEREA, CHERRY
