M&E Lecture Notes - Units 7-10
7.1 Introduction
After understanding what indicators are and how they link to the whole monitoring and evaluation
process, we now need to look at how to establish a monitoring and evaluation system. It is
important to understand that Monitoring and Evaluation is a vital, rational, integrated system within the
project that must be planned, managed and resourced.
You will recall from the module LDP 604 on project design and implementation that among the key
components of the project cycle is project Monitoring and Evaluation. Monitoring and Evaluation,
therefore, must be developed at the project planning stage.
It is therefore disastrous for project managers to view M&E as a statistical task or a tedious external
obligation of little relevance to those implementing projects.
From the previous lecture we have seen that it is hard to separate monitoring from evaluation. It is
therefore not wise to separate project monitoring functions from project evaluation functions such that
high-level, impact-related assessments are subcontracted while project staff focus only on tracking
short-term activities. This limits the opportunities to learn, since short-term activities form part of the
long-term, high-level impact of the project.
To ensure that M&E provides integrated support to those involved in project implementation, the
project manager needs to:
i. Create an M&E process that will lead to clear and regular learning for all those involved in
project strategy and operations.
ii. Understand the link between M&E and management functions
iii. Use existing processes of learning, communication and decision making among
stakeholders as the basis for project-oriented M&E.
iv. Put in place the necessary conditions and capacities for M&E to be carried out.
In the above section we have seen how M&E forms an integral system which assists in project
implementation. In this section we are going to focus on how M&E links up to all the operations within
the project to satisfy the project objectives.
Source: IFAD (2002). Setting up an M&E System. In Managing for Impact in Rural Development: A Guide
for Project M&E. https://edepot.wur.nl/288756
Figure 7.1 shows how M&E fits within the project. The figure focuses on the elements of M&E
and how they link with two components of the project: project strategies and operations.
i. Project Strategy: the project strategy is the plan for what will be achieved and how
it will be achieved. It forms the starting point for project implementation and for setting up an
M&E system. The strategy is the basis for working out the project operations required to
implement activities efficiently and effectively.
ii. The completion of project activities leads to a series of actual outputs, outcomes and impacts.
Comparing the actual outputs, outcomes and impacts with what was planned in the project
strategy and understanding the differences in order to identify changes in the strategy and
operations is the core function of the M&E system.
iii. It is clear from Figure 7.1 that the M&E system consists of four interlinked parts.
a) The first part is developing an M&E system. This is done by identifying the
information needed to guide the project strategy, ensure effective operations and meet external
reporting requirements. One then needs to decide how to gather and analyze this
information and to document a plan for the M&E system. The process of working out how to
monitor and evaluate a project inevitably raises questions about the project strategy itself, which
can help improve the initial design. Setting up an M&E system with a participatory approach
builds stakeholders’ understanding about the project and starts creating a learning
environment.
b) The second part is gathering management information. This is regarded as the
implementation of the M&E system. Information can be gathered through informal as well
as structured approaches. Information comes from tracking which outputs, outcomes and
impacts are being achieved and from checking project operations (e.g. activity completion,
financial management and resource use). Once you start gathering and managing information,
you will be able to solve some problems, or you will generate ideas that may lead to
revising the initial M&E plan.
c) The third part of the M&E system is involving project stakeholders in a critical reflection
process to improve the activity. Once the information has been collected it needs to
be analyzed and discussed by the project stakeholders. This may happen formally, for
example during an annual project review workshop, or informally, for example by talking
to project beneficiaries about the project during field visits. In these reflections
and discussions, you will probably notice information gaps. These can trigger adjustments to
the M&E plan to ensure the necessary information is being collected.
d) The fourth part of the M&E system is the communication of M&E results to the people who
need to use them. This is the part that determines the success of the M&E system. It
includes reporting to funding agencies, but it is broader than that. For example, problems
experienced by field staff need to be understood by project managers. Project progress and
problems must be shared with project participants to enable you to find solutions together.
Reports to funding agencies need to balance successes and mistakes and, above all, be
analytical and action-oriented. Some of those who are to use the information may have been
involved in collecting or analysing part of it; however, you need to plan how to inform
those who were not involved.
iv. The results from M&E must improve the project strategy and operations. Senior
management, with the support of project staff, is responsible for this. Sometimes the improvement
may be immediate, depending on the availability of resources; at other times it may require
negotiation between key project stakeholders, or a change in the sequence of certain
activities, and thus time to take effect.
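The four interlinked parts described above form a feedback loop: gather information against the strategy, reflect on it with stakeholders, communicate the results, and feed improvements back into strategy and operations. The sketch below is purely illustrative; every function and field name in it is hypothetical and not part of any standard M&E toolkit.

```python
# Illustrative sketch of the four interlinked parts of an M&E system as a
# feedback loop. All names here are hypothetical, for illustration only.

def run_me_cycle(strategy, gather, reflect, communicate):
    """One pass through the M&E cycle: gather information against the
    project strategy, reflect on it with stakeholders, communicate the
    results, and return proposed improvements to strategy/operations."""
    information = gather(strategy)            # part 2: gather management information
    findings = reflect(information)           # part 3: critical reflection with stakeholders
    communicate(findings)                     # part 4: communicate results to users
    return findings.get("improvements", [])   # feeds back into strategy and operations

# Minimal usage, with stub functions standing in for real M&E activities
improvements = run_me_cycle(
    strategy={"goal": "improved rural incomes"},
    gather=lambda s: {"outputs_tracked": 12, "issues": ["low uptake"]},
    reflect=lambda info: {"improvements": ["revise extension approach"]},
    communicate=lambda findings: None,
)
```

The point of the sketch is only that the fourth part is not the end: its output loops back into the first.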
The readiness assessment is a diagnostic tool that can be used to determine whether the prerequisites
are in place for building an M&E system. The following factors must be considered before setting up
an M&E system.
It is important to know where the demand for creating an M&E system is emanating from and why. Are
the demands and pressures coming from internal, multilateral, or international stakeholders, or some
combination of all these? These requests will need to be acknowledged and addressed if the response
is to be appropriate to the demand. To some extent internal demands may arise from calls for reforms
in public sector governance and for better accountability and transparency. Anti-corruption campaigns
may be a motivating force. Externally, pressures may arise from the donor community for tangible
development results for their investments. International organizations investing in development
projects, such as the European Union, expect a feedback system on public sector performance via M&E
for each of the accession countries. The competitive pressures of globalization may come into play, and
the rule of law, a strong governance system, and clearly articulated rules of the game are now necessary
to attract foreign investment. Financial capital and the private sector are looking for a stable, transparent
investment climate, and protection of their property and patents, before committing to invest in a
country. There are multitudes of pressures that project management may need to respond to, and these
will drive the incentives for building a results-based M&E system.
Champions in an organization implementing projects are critical to the sustainability and success of an
M&E system. Within a given organization implementing projects, there are individuals or groups who
will likely welcome and champion such an initiative, while others may oppose or even actively counter
the initiative. It is important to know who the champions are and where they are located in the
organization. Their support and advocacy will be crucial to the potential success and sustainability of
the M&E system. However, if the emerging champion is located away from the center of policymaking
and has little influence with key decision makers in that particular organization, it will be difficult,
although not impossible, to envision an M&E system being used and trusted. It will be hard to ensure
the viability of the system under these circumstances.
Viability is dependent upon the information being viewed as relevant, trustworthy, useable, and timely.
M&E systems with marginally placed champions who are peripheral to the decision making process
will have a more difficult time meeting these viability requirements. Information from the assessment
of the champions of the M&E system will help the project manager, together with the stakeholders,
define the roles and responsibilities that must be stated prior to the development of the system. Clearly
identify those who will be involved in the design, implementation and reporting, and allocate them
those responsibilities. Assigning roles and responsibilities in this way ensures that there is staff to
supervise the system and makes clear who will do what.
A careful institutional assessment should be made of the real capacity of the users to
create, utilize, and sustain the system. A carefully done readiness assessment helps provide a
good understanding of how to design the system to be responsive to the information needs of its users,
determine the resources available to build and sustain the system, and assess the capacities of those who
will both produce and use the information. Understanding these issues helps to tailor the system to the
right level of complexity and completeness. For a results-based M&E system to be effectively used, it
should provide accessible, understandable, relevant, and timely information and data. These criteria
drive the need for a careful readiness assessment prior to designing the system, particularly with
reference to such factors as ownership of the system, and benefits and utility to key stakeholders. From
a technical perspective, issues to be addressed include the capacity of the organization to collect,
analyze and interpret the data, produce reports, manage and maintain the M&E system, and use the
information produced. Thus, the readiness assessment will provide important information and baseline
data against which capacity-building activities—if necessary— can be designed and implemented.
Furthermore, there is an absolute requirement to collect no more information than is required. Time and
again, M&E systems are designed and are immediately overtaxed by too much data collected too
often—without sufficient thought and foresight into how and whether such data will actually be used.
Monitoring and evaluation is not an end unto itself. It is a tool to be used to promote good governance,
modern management practices, innovation and reforms, and better accountability. When used properly,
these systems can produce information that is trustworthy, transparent, and relevant. M&E systems can
help policymakers track and improve the outcomes and impacts of resource allocations. Most of all,
they help organizations make better informed decisions and policies by providing continuous feedback
on results. Experience shows that the creation of a results-based M&E system often works best when
linked with other public sector reform programs and initiatives, such as creating a medium-term public
expenditure framework, restructuring public administration, or constructing a National Poverty
Reduction Strategy. Linking the creation of M&E systems to such initiatives creates interdependencies
and reinforcements that are crucial to the overall sustainability of the systems. The readiness assessment
can provide a road map for determining whether such links are structurally and politically possible.
The basic concern of the project manager is to assess whether there are any organizational units,
departments or individuals within the organization that already have monitoring and evaluation
capacity and can undertake evaluations. An effective monitoring and evaluation system should have
competent staff to manage and oversee it. To ensure that the system works effectively, consider
developing the capacity of the people selected to manage it. It is important to assess the
organization's capacity to monitor and evaluate. As part of the preparation, an appropriate
organization structure should be identified. This gives the management team the authority to determine
the course of the system and avoids confusion about whose authority the system works under. The
project manager also needs to scout for capacity outside the organization, such as NGOs, universities,
research institutes and training centers, which may provide part of the necessary technical capacity to
support the organization's M&E system if the need arises. It is important for the project manager to
assess the following as they manifest in the project: technical skills; managerial skills; existing data
systems and their quality; technology available; fiscal resources available; and institutional experience.
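The capacity areas listed above can be organized into a simple readiness checklist. The sketch below is one hypothetical way to do so: the area names come from the text, but the 0-2 scoring scale, the threshold, and all function names are invented for illustration.

```python
# Sketch of a readiness-assessment checklist based on the capacity areas
# named in the text. The 0-2 scoring scale and the threshold are hypothetical.

CAPACITY_AREAS = [
    "technical skills",
    "managerial skills",
    "existing data systems and their quality",
    "technology available",
    "fiscal resources available",
    "institutional experience",
]

def readiness_gaps(scores, threshold=1):
    """Return the capacity areas scoring below the threshold, i.e. the areas
    where capacity-building activities may need to be designed."""
    return [area for area in CAPACITY_AREAS if scores.get(area, 0) < threshold]

# Example: strong skills, but weak data systems and no fiscal resources
scores = {
    "technical skills": 2,
    "managerial skills": 2,
    "existing data systems and their quality": 0,
    "technology available": 1,
    "fiscal resources available": 0,
    "institutional experience": 1,
}
gaps = readiness_gaps(scores)
```

The resulting gap list is exactly the baseline data against which capacity-building activities, where necessary, can be designed.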
7.6 Models for Setting up an Effective M&E System
Although experts differ on the specific sequence of steps in building a results-based M&E system, they
all agree on the overall intention of the system. For example, different experts propose four-step or
seven-step models. Regardless of the number of steps, the essential actions involved in building an
M&E system remain the same.
You will notice that these essential actions feature to a large extent in the two models that we are going
to discuss below.
Some experts suggest that six key steps should be considered when setting up an effective M&E system.
These are:
i. Establishing the purpose and scope of the M&E system. Under this, one should ask: why do
we need M&E, and how comprehensive should our M&E system be?
ii. Identifying performance questions, information needs and indicators. The question to be
raised is what do we need to know to monitor and evaluate the project in order to manage it
well?
iii. Planning information gathering and organization – how will the required information be
gathered and organized?
iv. Planning critical reflection processes and events – how will we make sense of the information
gathered and use it to make improvements?
v. Planning for quality communication and reporting – how and to whom do we want to
communicate what in terms of our project activities and processes?
vi. Planning for the necessary conditions and capacities – what is needed to ensure that our M&E
system actually works?
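The decisions produced by the six planning steps can be recorded as a single, structured M&E plan document. The sketch below shows one hypothetical shape for such a record; none of the field names or example values are prescribed by the six-step model itself.

```python
# Hypothetical structure recording the decisions produced by each of the six
# planning steps. Field names and example values are illustrative only.

me_plan = {
    "purpose_and_scope": "Track progress toward improved rural incomes; light-touch system",
    "performance_questions": [
        {
            "question": "Are household incomes rising in target villages?",
            "indicators": ["median household income",
                           "share of households above the poverty line"],
            "information_needs": ["annual household survey", "market price records"],
        }
    ],
    "information_gathering": {"methods": ["household survey", "field visit notes"],
                              "frequency": "annual"},
    "critical_reflection": {"events": ["annual review workshop",
                                       "quarterly staff meeting"]},
    "communication_and_reporting": {"audiences": {"funding agency": "annual report",
                                                  "beneficiaries": "village feedback meetings"}},
    "conditions_and_capacities": {"me_staff": 2,
                                  "budget_share": "about 5% of project budget"},
}

# A plan is complete only when every one of the six steps has an entry
REQUIRED_STEPS = ["purpose_and_scope", "performance_questions", "information_gathering",
                  "critical_reflection", "communication_and_reporting",
                  "conditions_and_capacities"]
plan_complete = all(me_plan.get(step) for step in REQUIRED_STEPS)
```

A record like this is also what Table 7.1 below elaborates step by step, splitting each entry into what the appraisal report provides and what must be detailed at start-up.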
Although these factors are examined extensively in the Kusek and Rist (2004) model, there is
need to look at them keenly. It is imperative to note that a good project appraisal report will include an
indicative M&E framework that provides enough detail about the above questions to enable budgeting
and the allocation of technical expertise, give funding agencies an overview of how M&E will be
undertaken, and guide project and partner staff during the project start-up phase. Let us briefly focus on
what each step entails.
a) Purpose and scope of the M&E system: A clear definition of the purpose and scope of the intended
M&E system helps when deciding on issues such as budget levels, the number of indicators to track,
and the type of communication needed. Specifying the purpose also helps to make clear what can be
expected of the M&E system, as it forces you to think about the nature of the project and the
implications for the information needed to manage it well.
b) Performance questions, information needs and indicators: It may be difficult for a project
manager to list quantitative indicators directly from the project objectives in the logframe
matrix. This is because some objectives are so complex that they cannot be
summarized in terms of one or a few indicators. Also, while it might be possible to find quantitative
information that shows whether objectives are being met, it does not necessarily explain
why, or whether this can be attributed to the project. Multiple sources of quantitative and
qualitative information are therefore critical: to explain what is happening, look closely at the
relationship between different pieces of information rather than at a single indicator.
Working with performance questions to guide indicator analysis will give you a more integrated
and meaningful picture of objective achievements. Answering these questions requires
descriptive analysis and qualitative information. Starting by identifying performance questions
makes it easier to recognize which specific indicators are really necessary. Sometimes a
performance question may be answered directly with a simple quantitative indicator. However,
very often the question can only be answered by a range of quantitative and qualitative
information.
Table 7.1 Tasks needed when detailing the M&E plan based on a project appraisal report

Step 1. Establish the purpose and scope
Output in project appraisal (M&E framework): Broadly define the purpose and scope of M&E in the
project context.
Tasks during project start-up to develop a detailed M&E system: Review the purpose and scope with
key stakeholders.

Step 2. Identify performance questions, indicators and information needs
Output in project appraisal: A list of indicative key questions and indicators for the goal, purpose and
output levels.
Tasks during project start-up: Assess the information needs and interests of all key stakeholders;
precisely define all questions, indicators and information needs for all levels of the objective
hierarchy; check each bit of information for relevance and end-use.

Step 4. Plan for communicating and reporting
Output in project appraisal: A broad description of key audiences and the type of information that
should be communicated to them, to enable resource allocation.
Tasks during project start-up: Make a precise list of all key audiences, what information they need,
when they need it and in what format; define what is to be done with the information (simply send it,
provide a discussion for analysis, seek feedback for verification, etc.); make a comprehensive schedule
for information production, showing who is to do what by when, in order to have information ready on
time.

Step 6. Plan for the necessary conditions and capacities
Output in project appraisal: Indicative staff levels and types, a clear description of the organization
structure of M&E, and an indicative budget.
Tasks during project start-up: Come to a precise description of the number of M&E staff, their
responsibilities and linkages, the incentives needed to make M&E work, the organizational
relationships between key M&E stakeholders, the type of information management system to be
established, and a detailed budget.
Kusek and Rist (2004) argue that the above model for setting up an M&E system is defective in that
it ignores some key factors, which makes the system unconvincing to the key project implementers.
They add that such models do not cater for organizational, political and cultural factors. They propose
a 10-step model which differs from others because it provides extensive detail on how to build,
maintain and, perhaps most importantly, sustain a results-based M&E system. The 10 steps proposed
by Kusek and Rist (2004) are briefly discussed below:
Step 1: Conducting a readiness assessment: This model differs from other approaches in that it
contains a unique readiness assessment. Such an assessment must be conducted before the actual
establishment of a system. The readiness assessment is, in essence, the foundation of the M&E system.
Just as a building must begin with a foundation, constructing an M&E system must begin with the
foundation of a readiness assessment. Without an understanding of the foundation, moving forward
may be fraught with difficulties and, ultimately, failure. The readiness assessment can be considered an
analytical framework for assessing the project's capacity and willingness to monitor and evaluate its
goals.
Step 2: Agreeing on Outcomes to Monitor and Evaluate: This stage will be described in detail in
lecture eight. What we need to know for now is that the outcomes help the project stakeholders "know
where you are going before you get moving". It is important that all project stakeholders agree on what
outcomes to monitor and evaluate. Clearly defined outcomes provide a foundation for designing and
building a sustainable M&E system. They also help in budgeting for outputs and in the general
management of the outcomes. Outcomes are usually not directly measured; they are only reported on.
At some level, outcomes must be translated into a set of key indicators.
Step 3: Selecting Key Indicators to Monitor Outcomes: You will recall that in lecture six we discussed
an indicator as a specific measure that, when tracked systematically over time, indicates progress (or
lack of it) toward a specific target. We also discussed the importance of indicators, the types of
indicator, the characteristics of good indicators, and the steps a project manager can follow in selecting
SMART indicators. It is important to note that selecting indicators is a key step in developing an M&E
system. All indicators emanate from the outcomes agreed upon by all the project stakeholders. The
most compelling question to ask yourself when selecting key indicators is: how will we know success
when we see it?
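One way to answer "how will we know success when we see it?" numerically is to compare an indicator's current value against its starting level and its target, and express progress as the share of the distance covered. This formula is a common convention, not something the lecture notes prescribe, and the numbers below are invented.

```python
# Sketch: progress on an indicator as the share of the distance from the
# starting (baseline) level to the target that has been covered so far.
# The formula is a common convention, not prescribed by these notes.

def progress(baseline, target, current):
    """Fraction of the baseline-to-target distance achieved (can exceed 1.0
    if the target is surpassed)."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (current - baseline) / (target - baseline)

# Example: a youth employment rate that starts at 40%, targets 60%,
# and currently stands at 45% -> a quarter of the way there.
quarter_way = progress(baseline=40.0, target=60.0, current=45.0)  # 0.25
```

The same function works whether the indicator is meant to rise or fall, since the sign of the distance cancels out.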
Step 4: Baseline Data on Indicators: Step 4 of the model relates to establishing performance
baselines—qualitative or quantitative— that can be used at the beginning of the monitoring period. The
performance baselines establish a starting point from which to later monitor and evaluate results. The
baseline provides a measurement 'to find out where we are today'. This stage will be discussed in detail
in lecture eight. The other steps suggested by Kusek and Rist (2004) are as follows:
Step 5 builds on the previous steps and involves the selection of results targets, that is, interim
steps on the way to a longer-term outcome. Targets can be selected by examining baseline indicator
levels and desired levels of improvement.
Step 6 includes both implementation and results monitoring. Monitoring for results entails collecting
quality performance data, for which guidelines are given.
Step 7 deals with the uses, types, and timing of evaluation.
Step 8 looks at ways of analyzing and reporting data to help decision makers make the necessary
improvements in projects, policies, and programs.
Step 9 is about using M&E findings and emphasizes the importance of generating and sharing
knowledge and learning within organizations.
Finally, Step 10 covers the challenges in sustaining results-based M&E systems, including demand,
clear roles and responsibilities, trustworthy and credible information, accountability, capacity, and
appropriate incentives.
UNIT EIGHT
8.1 Introduction
In lecture seven we discussed the basic models for setting up an M&E system. During this discussion
we realized that deciding on outcomes and setting targets is key in building up an M&E system.
Although indicators were discussed earlier, it is important for you to understand that one cannot set
indicators before determining outcomes. This is because it is the outcomes, not the indicators, that will
ultimately produce the project benefits. In this lecture we are going to discuss outcomes and baseline
targets as foundations for measuring project performance indicators.
Outcomes will demonstrate whether success has been achieved. In short, outcomes will show which
road to take. Setting outcomes is essential in building a results-based M&E system. Building the system
is basically a deductive process in which inputs, activities, and outputs are all derived and flow from
the setting of outcomes. Before discussing the process of setting up project M&E outcomes it is
important to look at the factors that a project manager should consider while choosing which outcome
to monitor.
There are several factors that a project manager should consider while choosing the outcomes to
monitor and evaluate. Some of these factors include:
i. Goals of the project and existing priorities: What are the strategic priorities? What are
the desired outcomes? These are the questions that every organization, every level of
government, and the interested parties in civil society should be asking themselves and
others. We focus primarily on how this relates to the national government.
Every country has finite budgetary resources and must set priorities. Consequently, it is
important to keep the following distinction in mind: One budgets to outputs and manages to
outcomes. There are many issues to consider in choosing outcomes to monitor and evaluate.
For example, outcomes could be linked to international economic development and lending
issues, including a National Poverty Reduction Strategy, a National Development Plan, and
even Millennium Development Goals. At the country level, there could already be some
stated national, regional, or sectoral goals. Also, political and electoral promises may have
already been made that specify improved governmental performance in a given area. In
addition, there may be citizen polling data indicating particular societal concerns.
Parliamentary actions and authorizing legislation are other areas that should be examined in
determining desired national goals. There may also be a set of simple goals for a given project
or program, or for a particular region of a country. From these goals, specific desired
outcomes can be determined. It should be noted that developing countries may face special
challenges in formulating national outcomes.
ii. Stakeholder interest: When setting outcomes it is important to capture the stakeholders'
interests. It is important to note that project outcomes aim to fulfil the felt needs of the
society or organization. In order to capture the stakeholders' interests, there is need to launch
a participatory process involving key stakeholders in the formulation of the outcomes.
iii. Available capacity: Available capacity in terms of finances and other resources, such as
human resources and technological capacity, is an important factor that should be considered
while formulating the project outcomes. Project performance is only realized in an
environment where adequate resources interact in an effective and efficient way to achieve
the desired outcome. It is pointless to formulate outcomes that will never be realized due to
lack of capacity.
After looking at the factors that you need to consider when choosing outcomes to monitor, let us now
discuss the process of setting and agreeing upon the outcomes to monitor. To jump-start this process,
you need to know where you are going, why you are going there, and how you will know when you
get there. There is a political process involved in setting and agreeing upon desired outcomes. Each
part is critical to achieving stakeholder consensus with respect to outcomes. The following are the
steps involved in setting and agreeing upon outcomes to monitor:
i. Identify Specific Stakeholder Representatives: Who are the key parties involved around
an issue area (health, education, and so forth)? How are they categorized, for example, NGO,
Government, donor? Whose interests and views are to be given priority?
ii. Identify Major Concerns of Stakeholder Groups: Use information gathering techniques
such as brainstorming, focus groups, surveys, and interviews to discover the interests of the
involved groups. Numerous voices must be heard—not just the loudest, richest, or most
well-connected. People must be brought into the process to enhance and support a
democratic public sector.
iii. Translate Problems into Statements of Possible Outcome Improvements: It should be
noted that formulating problems as positive outcomes is quite different from a simple
reiteration of the problem. An outcome oriented statement enables one to identify the road
and destination ahead. We encourage outcomes to be framed positively rather than
negatively. Stakeholders will respond and rally better to positive statements, for example,
“We want improved health for infants and children,” rather than “We want fewer infants and
children to become ill.” Positive statements to which stakeholders can aspire seem to carry
more legitimacy. It is easier to gather a political consensus by speaking positively to the
desired outcomes of stakeholders.
Figure 8.1 shows an example of formulating various concerns identified by stakeholders into positive,
desired outcomes.
In the figure, problems are translated into desired outcomes. However, there is need to disaggregate
each positive statement by considering the following questions: For whom? Where? How much? By
when? If we take an example from Figure 8.1, 'increase employment opportunities for youth in rural
areas', we can disaggregate this outcome to "increase employment among youth in the rural sector by
20 percent over the next four years." It is only by disaggregating the outcome and articulating the
details that we will know whether we have successfully achieved it. Simplifying and distilling
outcomes at this point also eliminates complications that may arise later when we start to build a
system of indicators, baselines, and targets by which to monitor and evaluate. By disaggregating
outcomes into subcomponents, we can set indicators to measure results.
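The disaggregation questions (for whom? where? how much? by when?) can be made concrete as fields of a small record, so that no outcome statement is accepted without all four answers. The sketch below uses the youth-employment example from the text; the class and field names themselves are hypothetical.

```python
# Sketch: a disaggregated outcome statement as a small record whose fields
# mirror the questions "for whom? where? how much? by when?". The class and
# field names are illustrative only.

from dataclasses import dataclass

@dataclass
class Outcome:
    change: str    # what should improve
    for_whom: str  # target group
    where: str     # location / sector
    how_much: str  # magnitude of change
    by_when: str   # time frame

    def statement(self):
        """Assemble the fully disaggregated outcome statement."""
        return (f"{self.change} among {self.for_whom} in {self.where} "
                f"by {self.how_much} over {self.by_when}")

outcome = Outcome(
    change="increase employment",
    for_whom="youth",
    where="the rural sector",
    how_much="20 percent",
    by_when="the next four years",
)
# outcome.statement() reproduces the disaggregated statement from the text
```

Forcing every outcome through such a template is one way to guarantee that the details needed later for indicators, baselines and targets are articulated up front.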
Having focused on the process of selecting key performance indicators to monitor outcomes, we now
need to examine the next level of the foundations of measuring project indicators: setting baselines and
gathering data on indicators. Establishing baseline data means establishing where we are at present
relative to the outcome we are trying to achieve. One cannot set targets for future project performance
without first establishing a baseline. The baseline is the first measurement of an indicator. It sets the
current condition against which future change can be tracked. For instance, it helps to inform
decision-makers about current circumstances before they embark on projecting targets for a given
program, policy, or project. In this way, the baseline is used to learn about current or recent levels and
patterns of performance. Importantly, baselines provide the evidence by which decision-makers are
able to measure subsequent policy, program, or project performance.
Establishing baselines is the third part of the performance framework. Baselines are derived from
outcomes and indicators. A performance baseline is information—qualitative or quantitative— that
provides data at the beginning of, or just prior to, the monitoring period. The baseline is used as a
starting point, or guide, by which to monitor future performance. Baselines are the first critical
measurement of the indicators. Figure 8.2 contains an example of baseline data for an Education project:
There are eight key questions that should be asked in building baseline information for every indicator.
(These questions continue to apply in subsequent efforts to measure the indicator.)
A target can be defined as a specified objective that indicates the number, timing and location of that
which is to be realized (IFAD, 2002). In essence, targets are the quantifiable levels of the indicators that
a country, society, or organization wants to achieve by a given time. For example, one target might be
"all families should be able to eat two meals a day, every day, by the year 2005." One method of
establishing targets is to start with the baseline indicator level and add the desired level of improvement
(taking into consideration available resources over a specific time period, for example, 24–36 months)
to arrive at the performance target. In so doing, the starting point will be known, as will the available
resources to make progress toward that target over a particular period of time. This will give the target
performance.
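The arithmetic just described can be sketched in a few lines of Python; the indicator and the figures used here are purely illustrative, not taken from the notes:

```python
def performance_target(baseline, desired_improvement):
    """Performance target = baseline indicator level plus the improvement
    judged feasible with available resources over the target period."""
    return baseline + desired_improvement

# Illustrative only: a baseline of 75% primary-school completion and a
# desired gain of 5 percentage points over 24-36 months
print(performance_target(75, 5))  # 80
```

The point of the sketch is simply that a target is never set in a vacuum: it is the baseline measurement plus a resource-constrained increment.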
There are a number of important factors to consider when selecting performance indicator targets. First
and foremost one needs to take baselines seriously. There must be a clear understanding of the baseline
starting point; for example, an average of the last three years’ performance, last year’s performance,
average trend, data over the past six months, and so forth. In other words, previous performance should
be considered in projecting new performance targets. One might observe how an organization or policy
has performed over the previous few years before projecting future performance targets.
Another consideration in setting targets is the expected funding and resource levels—existing capacity,
budgets, personnel, funding resources, facilities, and the like—throughout the target period. This can
include internal funding sources as well as external funding from bilateral and multilateral donors.
Targets should be feasible given all of the resource considerations as well as organizational capacity to
deliver activities and outputs. Most targets are set annually, but some could be set quarterly. Others
could be set for longer periods. However, setting targets more than three to four years forward is not
advisable. There are too many unknowns and risks with respect to resources and inputs to try to project
target performance beyond three to four years. In short, be realistic when setting targets.
The political nature of the process also comes into play. Political concerns are important. What has the
government or administration promised to deliver? Citizens have voted for a particular government
based on articulated priorities and policies that need to be recognized and legitimized in the political
process. Setting targets is part of this political process, and there will be political ramifications for either
meeting or not meeting targets. Setting realistic targets involves the recognition that most desired
outcomes are longer term, complex, and not quickly achieved. Thus, there is a need to establish targets
as short-term objectives on the path to achieving an outcome. So how does an organization or country
set longer-term, strategic goals to be met perhaps 10 to 15 years in the future, when the amount of
resources and inputs cannot be known? Most governments and organizations cannot reliably predict
what their resource base and inputs will be 10 to 15 years ahead. The answer is to set interim targets
over shorter periods of time when inputs can be better known or estimated. “Between the baseline and
the . . . [outcome] there may be several milestones [interim targets] that correspond to expected
performance at periodic intervals" (UNDP 2002, p. 66). For example, the Millennium Development
Goals (MDGs) have a 15-year time span. While these long-term goals are certainly relevant, the way
to reach them is to set targets for what can reasonably be accomplished over a set of up to four-year
periods.
The aim is to align strategies, means, and inputs to track progress toward the MDGs over shorter periods
of time with a set of sequential targets. Targets could be sequenced: target one could be for years one
to three; target two could be for years four to seven, and so on.
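One simple way to sequence such interim targets is linear interpolation between the baseline and the long-term goal. The sketch below assumes evenly spaced periods of equal progress, which a real plan need not use:

```python
def interim_targets(baseline, final_target, periods):
    """Evenly spaced milestones stepping from the baseline to the final target."""
    step = (final_target - baseline) / periods
    return [baseline + step * i for i in range(1, periods + 1)]

# Illustrative: moving an indicator from 40 to 100 over three planning periods
print(interim_targets(40, 100, 3))  # [60.0, 80.0, 100.0]
```

In practice milestones are adjusted for known funding cycles and risks, so the even spacing here is only a starting point for discussion.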
Flexibility is important in setting targets because internal or external resources may be cut or otherwise
diminished during budgetary cycles. Reorientation of the program, retraining of staff, and
reprioritization of the work may be required. This is an essential aspect of public management.
If the indicator is new, be careful about setting firm targets. It might be preferable to use a range instead.
A target does not have to be a single numerical value. In some cases it can be a range. For example, in
2003, one might set an education target that states “by 2007, 80 to 85 percent of all students who
graduate from secondary school will be computer literate.” It takes time to observe the effects of
improvements, so be realistic when setting targets. Many development and sector policies and programs
will take time to come to fruition. For example, environmental reforestation is not something that can
be accomplished in one to two years.
Finally, it is also important to be aware of the political games that are sometimes played when setting
targets. For example, an organization may set targets so modest or easily achieved that they will surely
be met. Another game that is often played in bureaucracies is to move the target as needed to fit the
performance goal. Moving targets causes problems because indicator trends can no longer be discerned
and measured. In other cases, targets may be chosen because they are not politically sensitive.
UNIT NINE
9.1 Introduction
Credibility of any evaluation is measured against standards of quality established by the International
Community of Evaluators. This lecture will introduce you to commonly agreed standards that you can
apply while planning for evaluation up to the implementation stage. The lecture covers utility
standards, feasibility standards, propriety standards and accuracy standards.
Before you plan to do an evaluation, make sure you understand the information needs of the project users.
It is important to ask yourself the following questions:
Once you can answer the above questions successfully, continue reading the brief note on the various
standards that you should consider when planning an evaluation process.
A sound and fair evaluation should address the interests and needs of those involved in or affected by the
evaluation: for example, the planners, designers and implementers of the project, the target group, development
partners, decision makers, evaluators and the general public.
9.3.2 Credibility of the evaluator
An evaluation can only achieve maximum credibility and acceptance if it is carried out by an evaluator
who is trustworthy, displays a high degree of integrity, is professionally competent, able to give
independent judgment, a good communicator and sociable.
The information surrounding the evaluation should be carefully selected. Information collected should
be responsive to the interests and needs of the stakeholders. It is therefore important to gather enough
information to answer all pertinent questions about the project.
Describe all the procedures used in data collection and interpretation carefully so that the bases for
value judgment are clear.
Describe clearly the project that is being evaluated. Give simple definitions that can be understood by the intended user.
Take Note: All stages of evaluation should describe the context, purpose, questions, procedures, funding, etc.
All reports should be presented in good time for use by the intended groups.
Take Note: Interim and final reports are equally important, for they may have an impact on the future action of the target group. Reporting is important.
9.3.7 Evaluation impact
In order to increase the evaluation's impact on the users, involve the stakeholders at the different stages of
the evaluation.
You have to ensure that feasibility standards are observed to check on the manner in which the evaluation
was carried out. The major question to ask yourself is: was the evaluation realistic, cost effective,
prudent and thoughtful?
9.4.1 Practical procedures
Under this standard it is important to ensure that the evaluation process embraces practical methods and
instruments. If this is adhered to, the evaluation process will guarantee production of the needed
information, and disruption will be minimized. To validate the evaluation methods and instruments there
is a need to involve all the stakeholders.
When planning and conducting an evaluation, it is important to take into account the interests of all
interested groups. This ensures maximum cooperation, which guarantees smooth running of the activity
and the generation of unbiased evaluation results.
A sound and fair evaluation is bound by legal and ethical standards. The evaluation process should be
hinged on sound ethical consideration to ensure the welfare of the stakeholders and participants. The
rights of the participants and beneficiaries should be respected by the methodology and procedures of
the evaluation process. The following standards should be considered.
Avoid 'gentlemen's agreements' and insist on putting all agreements in writing. This will bind the formal
parties so that they are obligated to adhere to all conditions of the agreement or to renegotiate it. When
preparing formal agreements, ensure that you spell out clearly what is to be done, how it will be done,
by whom and when. Budgets, time, personnel, design, methodology, and report content are also regulated
in formal writing.
An evaluation design and methodology should respect and protect the rights and welfare of the
stakeholders and participants. For instance, the instruments should at all costs avoid items that will cause
physical, physiological or even psychological damage to the participants. On the other hand,
evaluation findings that are perceived to have a negative effect on the beneficiaries must be justified
beyond any reasonable doubt before dissemination is done.
Evaluation should respect human dignity and worth in interactions with other persons associated
with the evaluation, so that participants are not threatened or harmed. It is therefore necessary to be
familiar with the cultural practices of the intended group, i.e. beliefs, customs, manners, etc.
9.5.4 Complete and fair assessment
The evaluation should be complete and fair in its examination and recording of the strengths and weaknesses
of the programme being evaluated, so that strengths can be built upon and weaknesses can be addressed.
The evaluation exercise depends on a methodology constrained by the available resources in
terms of time and budget. These are factors that determine whether the exercise is complete and fair. Therefore any
issues that may cause difficulty in the process of evaluation should be discussed and agreed upon before
the exercise.
The formal parties to an evaluation should ensure that the entire findings of the evaluation along with
pertinent limitations are made accessible to the persons affected by the evaluation and any others with
express legal rights to receive the findings.
As a project manager, you need to deal with conflict of interest openly and honestly so that it does not
compromise the evaluation process and results.
The integrity of the evaluation cannot be compromised just to accommodate conflicts of interest.
The evaluator's allocation and expenditure of resources should reflect sound accountability procedures.
Evaluators should be prudent and ethically responsible in the management of the resources assigned to
the evaluation exercise, to the satisfaction of the stakeholders.
The accuracy standards are intended to ensure that an evaluation will reveal and convey technically
adequate information about the features that determine the value of the programme being evaluated.
The standards involved include the following:
Describe and document clearly and accurately the project being evaluated. The description should be
sufficiently detailed to ensure an understanding of programme aims and strategies.
The context in which the programme exists should be examined in enough detail so that its likely
influence on the programme can be identified. This will help in the accurate interpretation of the
evaluation findings and in assessing the extent to which they can be generalized.
9.6.3 Described purposes and procedures
The purposes and procedures of the evaluation should be monitored and described in enough detail so
that they can be identified and assessed. Describing and clarifying the purpose and procedures of the
evaluation helps the evaluator focus on issues that are of greatest concern to
stakeholders. This ensures that time and resources are used as efficiently and effectively as possible.
The sources of information used in programme evaluation should be described in enough detail so that
their adequacy can be assessed. The criteria used for selecting sources should be stated clearly so that
users and other stakeholders can interpret the information accurately and assess whether it might be biased.
The information gathering procedures implemented should provide assurance that the interpretations
arrived at are valid and reliable. Validity and reliability can be seen as the extent to which methodologies
and instruments measure what they are intended to measure and produce the same results if
repeatedly applied.
The information collected, analyzed and reported in an evaluation should be systematically reviewed,
and any errors found should be corrected.
The information collected should be processed and analyzed in a systematic way so that the evaluation
questions can be effectively answered.
UNIT TEN
10.1 Introduction
After exposing yourself to different aspects of evaluation, your next task as an evaluator is to design an
evaluation exercise. At this juncture you will be required to have knowledge of the various methodologies
and tools that you will use to conduct the evaluation. In this lecture we are going to discuss key
monitoring and evaluation methods, factors to consider when selecting monitoring and evaluation
tools, and preparation of the monitoring and evaluation document (proposal).
Evaluation often produces controversial results. These might be criticized, especially in terms of
whether the data collection method, analysis and results lead to reliable information and conclusions
that reflect the situation.
Methods of data collection have strengths and drawbacks. Formal methods (surveys, participant
observation, direct measurements, etc.) used in academic research lead to qualitative and
quantitative data with a high degree of reliability and validity. The problem is that they are
expensive. Less formal methods (field visits, unstructured interviews, etc.) might generate rich
information but less precise conclusions, especially because some of those methods depend on
subjective views and intuitions.
There are two main evaluation approaches that an evaluator can consider when designing an evaluation
method. Some authors call them evaluation paradigms. These are the quantitative and the qualitative
approaches. Let us discuss each of them in turn.
10.3.1 Quantitative Approach
The quantitative approach to evaluation is a school of thought that certain groups of evaluators view as the
best for evaluating projects. They believe that evaluations that pursue this kind of approach produce
valid and reliable evaluation results. The quantitative approach is the systematic collection, analysis and
interpretation of numerical data for the purpose of explaining, predicting or controlling evaluation
phenomena. Some of the widely used designs in this approach include: survey designs, cross-sectional
design, longitudinal design, ex-post facto design, experimental design, and quasi-experimental design.
We know that you have covered these designs in the Research Methods module.
For the purpose of reminding ourselves, we will briefly discuss each of the above mentioned designs
below:
10.3.1.1. Survey Design: This design attempts to systematically collect, analyze and interpret numerical
data from members of a project's stakeholder groups in order to determine the current status of the project
target population with respect to one or more variables. There are two types of survey designs: sample
surveys and census surveys. Data is normally analyzed using frequencies, percentages, means, standard
deviations, ANOVA, and Chi-square.
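As a rough illustration of the analyses just listed, the sketch below computes frequencies, percentages, mean, standard deviation, and a simple chi-square goodness-of-fit statistic using only the Python standard library; the Likert-scale responses are invented for the example:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical Likert-scale responses (1-5) from a sample survey
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

freq = Counter(responses)                                     # frequencies
pct = {s: 100 * n / len(responses) for s, n in freq.items()}  # percentages

# Chi-square goodness-of-fit against a uniform expectation over the 5 scores
expected = len(responses) / 5
chi_sq = sum((freq.get(s, 0) - expected) ** 2 / expected for s in range(1, 6))

print("frequencies:", dict(freq))
print("percentages:", pct)
print(f"mean={mean(responses):.2f} sd={stdev(responses):.2f} chi-square={chi_sq:.2f}")
```

A real survey analysis would compare the statistic against a chi-square critical value (or use a package such as SciPy), but the descriptive steps are exactly the ones the design calls for.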
10.3.1.2. Cross-Sectional Design: In this design, subjects at various stages of development are
studied simultaneously. Suppose, for example, an evaluator is interested in the level of effects of a
project that is implemented in different phases: the result from each phase would be different from the
other. Therefore the overall project results can be attained by evaluating each phase and making
conclusions.
10.3.1.3. Longitudinal design: In this design the evaluator studies the same project (target population)
over a period of time. Using the above example, the evaluator will evaluate the effects of the programme
at each phase and study the effects of the project as it progresses from one phase to the other.
10.3.1.4. Correlation studies: This involves collecting data in order to determine whether, and to what
degree, a relationship exists between two or more quantifiable variables.
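The usual statistic for such studies is the Pearson product-moment correlation coefficient, which can be computed directly; the paired data below are hypothetical:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired variables."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative: hours of training received vs. a productivity score
hours = [2, 4, 6, 8, 10]
score = [30, 45, 55, 70, 80]
print(round(pearson_r(hours, score), 3))  # roughly 0.998, a strong positive relationship
```

A coefficient near +1 or -1 indicates a strong linear relationship; near 0, little or none. Correlation alone, of course, does not establish that the project caused the change.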
10.3.1.5. Ex-Post Facto Design (Causal Comparative): This attempts to determine the causes of, or
reasons for, existing differences in the status or behaviour of different groups of individuals. The design
is called ex-post facto because the evaluator attempts to identify the major factor which has led to a
difference between two groups of individuals after both the effect and the alleged cause have already
occurred, and studies them in retrospect.
10.3.1.6. Experimental design: In an experimental study, the evaluator manipulates at least one
independent variable, controls other variables and observes the effects on one or more dependent
variables (manipulation of independent variables involves determining which group of subjects get
which treatment). Independent variables typically manipulated may include types of inputs exposed to
project target groups or even the kind of services rendered to the groups by the project. The experimental
evaluation involves two groups: an experimental group and a control group. The experimental group
receives a new or novel treatment, while the control group receives either a different treatment or the usual
treatment. The control group is needed for comparison purposes.
The experimental design can be in form of true-experimental design, factorial design, or even quasi-
experimental design.
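As a minimal sketch of the comparison logic (a naive estimate, not a full statistical test), the simplest treatment-versus-control comparison is the difference between the group means; the outcome scores below are made up:

```python
from statistics import mean

def naive_effect(treatment_scores, control_scores):
    """Difference in group means: the simplest treatment-vs-control comparison."""
    return mean(treatment_scores) - mean(control_scores)

# Illustrative outcome scores for the two groups
treatment = [68, 74, 71, 77, 70]
control = [62, 65, 60, 66, 67]
print(naive_effect(treatment, control))
```

In a real experimental evaluation this difference would be tested for statistical significance (for example with a t-test) before any effect is claimed; the sketch only shows why the control group is needed as the comparison baseline.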
10.3.2 Qualitative Approach
This is another school of thought that a section of evaluators view as the best for evaluating projects.
They believe that evaluations that pursue this kind of approach produce valid and reliable evaluation
results. The qualitative approach is the systematic collection, analysis and interpretation of narrative data
for the purpose of explaining, and gaining insight into and understanding of, evaluation phenomena. It
provides descriptions and accounts of social events and of the object of an evaluation in its natural setting.
This implies that an evaluator using this approach to evaluate a project should look at it in view of:
§ Field work: referring to the mode of data collection that is in the field rather than in the
laboratory
§ Naturalistic – meaning that data should be collected where the events take place
§ Ethnographic – implying that the evaluation should be descriptive in nature. Culture of the
project beneficiaries should also be described and necessary linkages made
§ Symbolic interactions – this holds that 'human experience is mediated by
interpretations', implying that the meanings given to objects, events or situations are interpretations
made by the individuals concerned, given the occasion and environment. Thus the definitions and
settings of the occurrences are important in figuring out the meanings and the processes behind
them.
§ Phenomenological – this points to the subject matter to be investigated. The evaluator should
try to understand how people go about describing the behaviour in their respective world. The
evaluator should exercise a high degree of common sense during data collection process. This
is because micro-issues of data collection are of importance and can influence the direction of
the evaluation results.
§ Orientational – the evaluation should begin with a presumed general pattern (theoretical or
ideological perspective) which is then described and explained by the data gathered (Patton,
1990). Thus the evaluator goes into the field knowing the most important variables and concepts
that will direct the focus and interpretation of the evaluation findings.
Some of the widely used designs under this approach include case study, ethnography, phenomenology,
biography, grounded theory, etc.
10.3.2.1. Case study design: This design attempts to examine an individual or unit in depth in an
endeavour to describe behaviours or events and the relationships of these behaviours or events to
the subject’s history and environment. The emphasis of the design is to understand why an individual
does what he/she does and how behaviour changes as the individual responds to the environment. Case
study design aims at a comprehensive, systematic and in-depth gathering of information about a case
of interest. In such a study raw case data is assembled, a case record is constructed and ultimately a
case study narrative is produced. Types of Case studies include:
§ Historical Organizational case study – this is a study that traces the historical development of
an organization/project over time. It relies on document review and interviews
§ Observational case study – this is mostly used to study the interaction of a group of people over
a period of time. The major data collection technique is participant observation.
§ Situation analysis – in this form of case study a particular event is studied from the view point
of all major participants. The collective views of the participants are synthesized by the
evaluator to provide an understanding of the subject under study.
10.3.2.2. Ethnography Design: This design can be defined as a study of human societies, institutions
and social relationships by getting ‘inside them’. In this case the evaluator accesses the social world of
people or group being studied for the purpose of trying to understand their 'shared meaning' and 'taken-
for-granted assumptions'. The purpose of ethnographic evaluation is to describe a social unit as it exists
in its natural setting. The major data gathering technique in this design is participant observation.
Visual recordings of events, e.g. diagrams, photographs and videotape, may be used.
10.3.2.3. Phenomenological Design: This design describes the meaning of a lived experience. The
evaluator sets aside all the prejudgments and collects data on how individuals make sense out of a
particular experience or situation. The technique used for data collection in this design is the long
interview, directed towards understanding the subjects' perspectives on their everyday lived experience
with the phenomenon (McMillan and Schumacher, 2001). Phenomenological studies enable
readers to feel that they understand more fully the concepts related to the particular experience.
Monitoring and evaluation may use various tools for data collection, such as formal interviews, literature
review, questionnaires and surveys, in-depth interviews, focus group discussions, document reviews, field
work reports, case studies, participant observation, and community meetings.
Common data collection methods: description/purpose, advantages, and disadvantages/challenges

Literature search
Description/purpose: Gather background information on evaluation methods used by others.
Advantages:
§ Economic and efficient way of obtaining information
Disadvantages/challenges:
§ Difficult to assess validity and reliability of secondary data

Questionnaires and surveys
Description/purpose: Oral interviews or written questionnaires of a representative sample of respondents.
Advantages:
§ Produce reliable information
§ Can be completed anonymously
§ Easy to compare and analyze
§ Can be administered easily to a large number of people
§ Collect a lot of data in an organized manner
Disadvantages/challenges:
§ Demanding and could be costly
§ Might not get careful feedback
§ Wording can bias clients' responses
§ Data are analyzed for groups and are impersonal
§ Surveys may need sampling experts
§ Provide numbers but do not get the full story

Interviews
Description/purpose: To fully understand someone's impressions or experiences, or to learn more about their answers to a questionnaire. Individual or group interviews can be organized to assess the perceptions, views and satisfaction of beneficiaries.
Advantages:
§ Give full range and depth of information and yield rich data, details and new insights
§ Can be flexible with the client
§ Permit face-to-face contact with the respondent and provide the opportunity to explore topics in depth and obtain useful responses
§ Allow the interviewer to be flexible in administering interviews to particular individuals or circumstances
Disadvantages/challenges:
§ Can be hard to analyze and compare
§ Interviewers can bias responses
§ Can be expensive and time consuming
§ Need well qualified, highly trained interviewers
§ Flexibility results in inconsistencies across interviews
§ Volume of information may be too large and difficult to reduce

Document review
Description/purpose: Gain an impression of how the programme operates, without interrupting it, by reviewing applications, finances, memos, minutes, etc.
Advantages:
§ Gives impressions and historical information
§ Does not interrupt the programme or the client's routine in the programme
§ Information already exists
§ Few biases about information
Disadvantages/challenges:
§ Often takes a lot of time
§ Information may be incomplete; quality of documentation might be poor
§ Need to be clear about purpose
§ Not a flexible means to get data; data restricted to what already exists

Observations
Description/purpose: Involves inspections, field visits and observation to understand processes, infrastructure/services and their utilization. Gathers accurate information about how a programme actually operates, particularly about processes.
Advantages:
§ Well suited for understanding processes and the operation of the programme while they are actually occurring
§ Can adapt to events as they occur and exist in a natural, unstructured and flexible setting
§ Provide direct information about the behaviour of individuals and groups
§ Permits the evaluator to enter into and understand the situation/context
§ Provide good opportunity for identifying unanticipated results
Disadvantages/challenges:
§ Dependent on the observer's understanding and interpretation
§ Has limited potential for generalization
§ Can be difficult to interpret exhibited behaviour
§ Can be complex to categorize observations
§ Can influence the behaviour of programme participants
§ Can be expensive and time consuming
§ Needs well qualified, highly trained observers and content experts
§ Investigators have little control of the situation

Focus groups
Description/purpose: A focus group brings together a representative group of 8–10 people who are asked a series of questions related to the task at hand.
Advantages:
§ Efficient and reasonable in terms of cost
§ Stimulates the generation of new ideas
Disadvantages/challenges:
§ Can be hard to analyze responses
§ Need good facilitators
§ Difficult to schedule 8–10 people

Case study
Description/purpose: In-depth view of one or a small number of selected cases, to fully understand or depict beneficiaries' experience in a project and to conduct comprehensive examination through cross comparison of cases.
Advantages:
§ Well suited for understanding processes and for formulating hypotheses to be tested later
§ Fully depicts the client's experience in programme input, process and results
§ Powerful means to portray the programme to outsiders
Disadvantages/challenges:
§ Usually time consuming to collect, organize and describe
§ Represents depth of information rather than breadth

Key informant interviews
Description/purpose: Interviews with persons who are knowledgeable about the community targeted by the project. A key informant is a person (or group) who has unique skills or a professional background related to the issue/intervention being evaluated, is knowledgeable about the project participants and/or has access to other information of interest to the evaluator.
Advantages:
§ Flexible, in-depth approach
§ Easy to implement
§ Provides information concerning causes, reasons and/or best approaches from an 'insider' point of view
§ Advice/feedback increases credibility of the study
§ May have the side benefit of solidifying relationships between the evaluator, beneficiaries and other stakeholders
Disadvantages/challenges:
§ Risk of biased presentation/interpretation from informants/interviewer
§ Time required to select informants and get their commitment may be substantial
§ Relationship between evaluator and informants may influence the type of data obtained
§ Informants may interject their own biases and impressions

Direct measurement
Description/purpose: Registration of quantifiable or classifiable data by means of analytical instruments.
Advantages:
§ Precise
§ Reliable and often requires few resources
Disadvantages/challenges:
§ Registers only facts, not explanations

Source: Information on common qualitative methods is provided in the earlier User-Friendly Handbook for Project Evaluation (NSF 93-152).
10.5 Preparation of monitoring and evaluation plan (proposal)
An evaluation plan is a framework that clarifies key elements of a proposed evaluation. Ideally, this
is a stage that involves evaluation proposal preparation. Before one embarks on serious evaluation
proposal development, one needs to ask the following questions:
There are many ways of writing an M&E proposal. The most common way is explained
below.
Step One: Preliminary Pages
The title page may contain: the name of the project, programme or theme being evaluated; the
country/ies of the project/programme or theme; the name of the organization to which the report is
submitted; and the names and affiliations of the evaluators and the date. Finally, the preliminary pages
include the table of contents and the list of acronyms and abbreviations.
Step Two: This constitutes chapter one of the proposal, which contains the introduction to the
evaluation and may include the following areas:
i. Context of the evaluation: Briefly describe the project to establish whether it is new,
developing or firmly established
ii. Purpose of the evaluation: After describing the context of evaluation make a statement of
need and then state the purpose of the evaluation. Before writing the purpose consider the
following:
§ why is this evaluation important?
§ what are the implications of the evaluation and how does it relate to the future work of
the area?
iii. Evaluation question and objectives: These are derived from the statement of the purpose of
the evaluation. Examples of evaluation questions are:
§ Are the planned activities actually being carried out?
§ Is the programme achieving its intended objectives?
§ How does the project compare with alternative strategies for achieving the same ends?
v. Limitations and Delimitations of the evaluation: A single evaluation may not cover all the
aspects of interest. It can be limited to certain types of projects, geographical areas, etc. These
should be stated and justified. This section describes the limits, or scope, of the study. The
evaluator should give reasons for not extending beyond the determined scope.
vi. Assumptions of the evaluation: An evaluator should indicate those factors operating in
his/her study which he/she will assume do not affect the evaluation results. Such
factors should be those the evaluator can do nothing about through sampling or study design and
therefore has to accept and live with. For example, an evaluator may assume that the
evaluation participants will give honest and frank answers.
vii. Definition of the significant terms: This should be restricted only to those terms which may
convey different meanings to different people. Such definitions are sometimes called
operational definitions. There are other terms which are not observable but which can only
be inferred by subject behaviour when faced with a specific situation. Such terms are called
constructs.
viii. Evaluation Model: Evaluator needs to look at the entire evaluation and make decision on
the type of evaluation model that fits it (see lecture Four).
ix. Conceptual framework: The evaluator needs to explain a framework that shows the
interrelationship of the independent and dependent variables under evaluation. This helps
in focusing the evaluation.
x. Outline of the final evaluation report: All the chapters of the evaluation report
must be highlighted.
Step Three: This is referred to as chapter two. It contains the description of the project being
evaluated. This may include:
• Date the project was started
• Philosophy behind the programme
• Types of beneficiaries for whom the project is designed
• Project outcomes
• Project scheduling
• Content
• Administrative and management procedures
Step Four: This entails a review of previous evaluation
studies related to the evaluation. The section is very important because it helps one to understand
the various methods of evaluation that were used elsewhere and the kind of results that were realized.
Step Five: This step entails the methodology that will be used in the evaluation. It constitutes
chapter three, which is concerned with evaluation design and methodology. It touches on:
• Evaluation Design
• Target population and sample
• Description of the sample
• Description of the instruments
• Data collection
• Data analysis plan
• Work Plan
• Budget