
PHT422: HEALTH PROGRAMME MONITORING AND EVALUATION

MASENO UNIVERSITY
SCHOOL OF PUBLIC HEALTH AND COMMUNITY DEVELOPMENT
(SPHCD)
BSC PUBLIC HEALTH WITH IT
Year 2 semester 2
COURSE OUTLINE
COURSE FACILITATOR: D. Masinde
E-MAIL ADDRESS: dmasinde2004@yahoo.com
1. Course Code: PHT422
2. Course Title: HEALTH PROGRAMME MONITORING AND EVALUATION
(ST - 1 UNIT)
3. Introduction: Monitoring and Evaluation are important tools which an organization can
use to demonstrate its accountability, improve its performance, strengthen its ability to
obtain funds and plan for the future, and fulfil the organization's objectives. By
communicating the results of the evaluation, your organization can inform its staff, board
of directors, service users, funders, the public, or other stakeholders about the benefits,
efficiency, impact, lessons learnt and effectiveness of the organization's services and
programmes.
4. Course Description: Concepts and Principles of Planning, Monitoring and
Evaluation: Introduction to Planning, Monitoring and Evaluation, relationship between
monitoring and evaluation, defining program components, different types of M&E; Project
design-project life cycle, stakeholder analysis and management, project control, critical path
planning and project resources scheduling and project flowcharts
Monitoring & Evaluation Frameworks: M&E frameworks: conceptual frameworks, logical
frameworks, result frameworks and M&E plan, Developing indicators, Measurement of results,
supervision of performance monitoring, planning and implementing participatory monitoring
Evaluation Processes: Planning an evaluation activity, data quality, Designing an evaluation
and Conducting an evaluation, impact assessment
Data Analysis And Report Writing: Qualitative and Quantitative data analysis; process,
methods, interpretation, presentation; Report writing and presentation skills; Designing Health
and Information Systems and Management Information Systems (HIS/MIS); Emerging issues in
M&E and HIS/MIS. Economic evaluations: CBA, CUA, CEA

PHT 422: Health Programme Monitoring and Evaluation David Masinde


Course Objectives:
i) Describe the basic concepts in monitoring and evaluation of public health projects
ii) Explain the methods of project monitoring and evaluation
iii) Discuss the role of context in Monitoring and Evaluation
iv) Acquire the skills necessary for designing and implementing a monitoring and
evaluation process for a health project.

COURSE CONTENT
LESSON | TOPIC/SUB-TOPIC | CONTENT
1 | Introduction to monitoring and evaluation | Purpose, types of M/E, approaches of M/E
2 | Logical framework matrix | Principles of M/E, logical framework, steps for designing logical frameworks
3 | Steps for designing an M/E system | Readiness assessment, agreeing on outcomes to monitor, constructing indicators, TORs, evaluation questions, evaluation tools
4 | Steps for designing an M/E system (continued) | Data collection tools, ethical review, analysing data
5 | CAT | CAT 1
6 | Information gathering in M/E | Quantitative and qualitative data, means of gathering the information, tools of data collection, analysing data
7 | M/E information use | Users of M/E information
8 | M/E report writing, communication of findings, economic evaluations | CBA, CUA, CEA
9 | CAT | CAT 2
10 | M/E case studies | Malaria, TB, VCT, training programmes

EVALUATION
CATs and Assignments - 40%
Final exam - 60%



Total- 100%
Pass mark- 40%
References
Core texts:
Gitonga B. A (2012). Project Monitoring, Evaluation, Control & Reporting. Community
development approach. PSI, Nairobi
Gitonga B. A (2012). Project planning and Management. PSI. Nairobi
Murphy, J. and Marchant (1998). Monitoring and Evaluation in Extension Agencies.
World Bank Technical Paper No. 79. World Bank, Washington, DC.
World Bank (1997). The Logframe Handbook: A Logical Framework Approach to Project
Cycle Management. World Bank, Washington, DC.
World Bank (2003). Extension and Rural Development: Converging Views on
Institutional Approaches.



PHT422: MONITORING AND EVALUATION

CHAPTER ONE: INTRODUCTION TO MONITORING AND EVALUATION

Concepts
Monitoring and Evaluation are important tools which an organization can use to demonstrate its
accountability, improve its performance, strengthen its ability to obtain funds and plan for the
future, and fulfil organizational objectives. By communicating the results of the evaluation, your
organization can inform its staff, board of directors, service users, funders, the public, or other
stakeholders about the benefits and effectiveness of your organization's services and
programs, and explain how charities work and how they are monitored.
Project evaluation and project management are interrelated. Evaluation can help you complete a
project successfully, provide evidence of successes or failures, suggest ways for improvements,
and inform decisions about the future of current and planned projects.

Project evaluation is an accountability function. By evaluating a project, you monitor the process
to ensure that appropriate procedures are in place for completing the project on time, and you
identify and measure the outcomes to ensure the effectiveness and achievements of the project.

Monitoring
Monitoring is the systematic collection and analysis of information as a project progresses.
 It is aimed at improving the efficiency and effectiveness of a project or organisation.
 It is based on targets set and activities planned during the planning phases of work. It
helps to keep the work on track, and can let management know when things are going
wrong.
 If done properly, it is an invaluable tool for good management, and it provides a useful
base for evaluation- monitoring is an input to project evaluation.
 It enables you to determine whether the resources you have are sufficient and are being
well used, whether the capacity you have is sufficient and appropriate, and whether you
are doing what you planned to do.

Monitoring of a project involves the following activities:

 Establishing indicators of efficiency, effectiveness and impact;
 Setting up systems to collect information relating to these indicators;
 Collecting and recording the information;
 Analyzing the information;
 Using the information to inform day-to-day management.
Monitoring is an internal function in any project or organisation.

Types of Monitoring
Project Monitoring and evaluation can be categorized according to elements or levels measured
or according to the person or institution which takes lead in setting up, managing and using the
system.

Types of monitoring according to elements tracked/measured


1. Traditional monitoring
This focuses on implementation (monitoring at operational level of the project) monitoring
which involves tracking inputs (financial resources, human resources, materials, strategies),
activities (what actually took place) and outputs (the products or services produced). This
approach focuses on monitoring how well a project, program or policy is being implemented and
is often used to assess compliance with work plans and budgets.
2. Results-Based Monitoring
This is a continuous process of collecting and analyzing information to compare how well a
project, program or policy is performing against expected results.

Types of monitoring and evaluation according to leadership and utilization


1. Conventional monitoring:
This is where people who are not part of the community, such as donor representatives or
external consultants, are primarily responsible for identifying needs, developing a general project
concept, providing money and other resources, then monitoring and evaluating project activities.



2. Participatory monitoring
This supports active involvement within the monitoring and evaluation process for those who
have a vested interest in an intervention. Such stakeholders include beneficiaries (primary
stakeholders), service providers, partners (donors, governments and civil society organizations)
and customers, along with any other interested parties.

Concept of Project Evaluation


Evaluation is a systematic investigation of the worth or significance of a project. Evaluation
normally involves some standards, criteria, measures of success, or objectives that describe the
value of the object. Evaluation can identify criteria for success, lessons to learn, things to
achieve, ways to improve the work, and the means to move forward.

Project evaluation involves assessment of activities that are designed to perform a specified task
in a specific period of time.
Evaluation is the comparison of actual project impacts against the agreed strategic plans. It looks
at what you set out to do, at what you have accomplished, and how you accomplished it.
What monitoring and evaluation have in common is that they are geared towards learning from
what one is doing and how one is doing it, by focusing on the following aspects:

 Efficiency tells you whether the input into the work is appropriate in terms of the output. This
could be input in terms of money, time, staff, equipment and so on. When you run a
project and are concerned about its replicability or about going to scale, then it is very
important to get the efficiency element right.
 Effectiveness is a measure of the extent to which a development programme or project
achieves the specific objectives it set. If, for example, we set out to improve the
qualifications of all the high school teachers in a particular area, did we succeed?
 Lessons learnt
 Impact tells you whether or not what you did made a difference to the problem situation
you were trying to address.

Types of Evaluations
Different forms of evaluations are conducted by development actors. Evaluations may be
categorized according to the following;
1) The person who takes the lead or participates in the evaluation,
2) Level of emphasis
3) Time at which an intervention is evaluated
4) Nature of intervention being evaluated.

 Category according to leadership in evaluation


In terms of who takes the lead, evaluations can be categorized as follows;
i. Participatory (where stakeholders shape and use the evaluation outcomes).
ii. Conventional evaluation (led by external agents).

 Project evaluation according to the project life


Evaluation can be conducted at different times in the project life, as explained below:
1) Before implementation, to ascertain the feasibility of the planned intervention (also
called project appraisal);
2) During the course of project implementation (mid-term evaluation);
3) At the end of the project (end-of-project or summative evaluation);
4) Long after the project has been implemented (impact evaluation).

Important notes
 In terms of focus, evaluation may examine implementation alone (traditional or process
evaluation) or implementation together with the results (results-based evaluation).
 In terms of the intervention being evaluated, an evaluation may cover a project, a
programme, or be strategy- or institution-specific.

Evaluation based on leadership:


 External evaluation: This is an evaluation done by a carefully chosen outsider or
outside team.
 Internal evaluation: This is an evaluation carried out by the project team from within the
organization.

Advantages and Disadvantages of Internal and External Evaluations


Internal evaluation

Advantages:
 The evaluators are very familiar with the work, the organizational culture and the aims
and objectives.
 Sometimes people are more willing to speak to insiders than to outsiders.
 An internal evaluation is very clearly a management tool, a way of self-correcting, and
much less threatening than an external evaluation. This may make it easier for those
involved to accept findings and criticisms.
 An internal evaluation will cost less than an external evaluation.

Disadvantages:
 The evaluation team may have a vested interest in reaching positive conclusions about
the work or organisation. For this reason, other stakeholders, such as donors, may prefer
an external evaluation.
 The team may not be specifically skilled or trained in evaluation.
 The evaluation will take up a considerable amount of organizational time; while it may
cost less than an external evaluation, the opportunity costs may be high.

External evaluation (done by a team or person with no vested interest in the project)

Advantages:
 The evaluation is likely to be more objective as the evaluators will have some distance
from the work.
 The evaluators should have a range of evaluation skills and experience.
 Sometimes people are more willing to speak to outsiders than to insiders.
 Using an outside evaluator gives greater credibility to findings, particularly positive
findings.

Disadvantages:
 Someone from outside the organisation or project may not understand the culture or
even what the work is trying to achieve.
 Those directly involved may feel threatened by outsiders and be less likely to talk
openly and cooperate in the process.
 External evaluation can be very costly.
 An external evaluator may misunderstand what you want from the evaluation and not
give you what you need.

Qualities to look for in an external evaluator or evaluation team:


 An understanding of development issues.
 An understanding of organizational issues.
 Experience in evaluating development projects, programmes or organizations.
 A good track record with previous clients.
 Research skills.
 A commitment to quality.
 A commitment to deadlines.
 Objectivity, honesty and fairness.
 Logic and the ability to operate systematically.
 Ability to communicate verbally and in writing.
 A style and approach that fits with your organization.
 Values that are compatible with those of the organization.
 Reasonable rates (fees), measured against the going rates.

Factors to consider when selecting an external evaluator


When you decide to use an external evaluator, consider the following factors:
 Check his/her/their references.
 Meet with the evaluators before making a final decision.
 Communicate what you want clearly - good Terms of Reference are the foundation of a
good contractual relationship.
 Negotiate a contract which makes provision for what will happen if time frames and
output expectations are not met.
 Ask for a work plan with outputs and timelines.
 Maintain contact - ask for interim reports as part of the contract - either verbal or
written. Build in formal feedback times.
 Do not expect any evaluator to be completely objective. S/he will have opinions and
ideas - you are not looking for someone who is a blank page! However, his/her opinions
must be clearly stated as such, and must not be disguised as "facts". It is also useful to
have some idea of his/her (or their) approach to evaluation.

Project Evaluation on Basis of Purpose


Approaches for project evaluation based on goals, decision and expertise
Approach: Goal-based
Major purpose: Assessing achievement of goals and objectives.
Typical focus questions: Were the goals achieved? Efficiently? Were they the right goals?
Likely methodology: Comparing baseline and progress data; finding ways to measure
indicators.

Approach: Decision-making
Major purpose: Providing information.
Typical focus questions: Is the project effective? Should it continue? How might it be
modified?
Likely methodology: Assessing the range of options related to the project context, inputs,
process and product; establishing some kind of decision-making consensus.

Approach: Goal-free
Major purpose: Assessing the full range of project effects, intended and unintended.
Typical focus questions: What are all the outcomes? What value do they have?
Likely methodology: Independent determination of needs and standards to judge project
worth; qualitative and quantitative techniques to uncover any possible results.

Approach: Expert judgment
Major purpose: Use of expertise.
Typical focus questions: How does an outside professional rate this project?
Likely methodology: Critical review based on experience, informal surveying, and
subjective insights.

Forms of Evaluation
Depending on the above criteria explained, the Forms of Evaluation can be explained as follows;
1) Ex-ante or prospective evaluation: An evaluation that is performed before the
implementation of a development intervention (project appraisal).

2) Ex-post evaluation: Evaluation of a development intervention after it has been


completed. It may be undertaken directly after or long after completion. The intention is
to identify factors of success or failure, to assess the sustainability of results and impacts,
and draw conclusions that may inform other interventions.

3) Mid-term evaluation: Evaluation performed toward the middle of the period of


implementation of an intervention.



4) Formative Evaluation: Evaluation intended to improve performance, most often
conducted during the implementation phase of the project or programme.

5) Summative Evaluation: An evaluation that is conducted at the end of an intervention (a


phase of that intervention) to determine the extent to which anticipated outcomes were
produced. Summative evaluation is intended to provide information about the worth of
the programme.

6) Process evaluation: An evaluation of the internal dynamics of implementing
organizations, their policy instruments, their service delivery mechanisms, their
management practices, and the linkages among these (compare with formative evaluation).

Project evaluation helps you understand the progress, success, and effectiveness of a project. It
provides you with a comprehensive description of a project, including insight into the:
i. Needs your project will address;
ii. People who need to get involved in your project;
iii. Definition of success for your project;
iv. Outputs and immediate results that you could expect;
v. Outcomes your project is intended to achieve;
vi. Activities needed to meet the outcomes; and
vii. Alignment and relationships between your activities and outcomes.

Activities in Project Evaluation:


1) Looking at what the project or organization intended to achieve: what difference did it
want to make? What impact did it want to make? (For this, refer to the project design
and related management questions.)
2) Assessing its progress towards what it wanted to achieve, its impact targets.
3) Looking at the strategy of the project or organization. Did it have a strategy? Was it
effective in following its strategy? Did the strategy work? If not, why not?
4) Looking at how it worked. Was there an efficient use of resources? What were the
opportunity costs of the way it chose to work? How sustainable is the way in which the
project or organization works? What are the implications for the various stakeholders in
the way the organization works?
In an evaluation, efficiency, effectiveness and impact are determined.

Uses of Project Evaluation Information


 Identify ways to improve or shift your project activities;
 Facilitate changes in the project plan;
 Prepare project reports (e.g., mid-term reports, final reports);
 Inform internal and external stakeholders about the project;
 Plan for the sustainability of the project;
 Learn more about the environment in which the project is being or has been carried
out;
 Learn more about the target population of the project;
 Present the worth and value of the project to stakeholders and the public;
 Plan for other projects;
 Compare projects to plan for their futures;
 Make evidence-based organizational decisions;
 Demonstrate your organization's ability in performing evaluations when searching for
funds; and
 Demonstrate your organization's concerns to be accountable for implementing its plans,
pursuing its goals, and measuring its outcomes.



Difference between Monitoring and Evaluation

Monitoring:
 Keeps track of daily activities.
 Accepts policies, rules, and procedures.
 Works toward targets.
 Stresses the conversion of inputs to outputs.
 Concentrates on planned elements.
 Reports progress.

Evaluation:
 Takes the long-range view.
 Questions policies, rules, and procedures.
 Measures progress and asks whether targets are met.
 Emphasizes achievement of purpose.
 Assesses planned elements and looks for unplanned change.
 Searches for causes and challenges assumptions.
 Records lessons learned.

In building Monitoring & Evaluation systems, the following actions are essential:
 Formulation of outcomes and goals
 Selection of outcome indicators to monitor
 Gathering baseline information on the current condition
 Setting specific targets to reach and dates for reaching them
 Regularly collecting data to assess whether the targets are being met
 Analyzing and reporting results
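The last two actions above (regularly collecting data and assessing it against targets) can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the example figures are assumptions, not part of the course text:

```python
# Hypothetical sketch: checking whether an indicator is on track against
# its target, as in "regularly collecting data to assess whether the
# targets are being met". Names and figures are illustrative only.

def on_track(baseline: float, target: float, actual: float) -> bool:
    """True if the actual value has reached the target, allowing for
    indicators that are meant to decrease (e.g. malaria incidence)."""
    if target >= baseline:      # indicator should increase
        return actual >= target
    return actual <= target     # indicator should decrease

# Immunization coverage: baseline 60%, target 80%, latest survey 83.5%
print(on_track(60.0, 80.0, 83.5))   # True
# Malaria incidence per 1,000: baseline 120, target 90, latest 100
print(on_track(120.0, 90.0, 100.0))  # False
```

A real M&E system would also record when each data point was collected and report the gap between actual and target, not just a yes/no flag.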

Basic Principles of Monitoring and Evaluation


1) Good monitoring focuses on results and follow-up. It looks for 'what is going well' and
'what is not progressing' in terms of progress towards intended results. It then records
this in reports, makes recommendations and follows up with decisions and actions.

2) Good monitoring depends to a large measure on good design. If a project is poorly


designed or based on faulty assumptions, even the best monitoring is unlikely to ensure
its success.

3) Good monitoring requires regular visits by staff that focus on results and follow-up to
verify and validate progress. In addition, the Programme Manager must organize visits
and/or bilateral meetings dedicated to assessing progress, looking at the big picture
and analyzing problem areas.

4) Regular analysis of reports such as the annual project report (APR) is another minimum
standard for good monitoring.
5) Good monitoring finds ways to objectively assess progress and performance based on
clear criteria and indicators. To better assess progress towards outcomes, country offices
must make an effort to improve their performance measurement system by developing
indicator baselines.
6) Assessing the relevance, performance and success of project development interventions
also enhances monitoring.



CHAPTER TWO: STEPS FOR DESIGNING MONITORING AND EVALUATION
SYSTEM
Project monitoring and evaluation is a systematic and logical process.
1) Conduct a Readiness Assessment
a) This is important as a precondition for take-off. In this step organizations should
establish:
 What potential pressures are encouraging the need for an M&E system within the
organization?
 Who is the advocate for an M&E system within the organization?
 What is motivating the champion to support such an effort?
 Who will own the system? How will they benefit from the system? How much
information do they really want?
 How will the system directly support better resource allocation and the achievement of
program goals?
 How will the organization, the champions and staff react to negative information
generated by the M&E system?
 Does capacity exist to support a results-based M&E system?
 How will the M&E system link project, program, sector and national goals?

2) Agreeing on outcomes to monitor and evaluate.


Establishing outcomes illustrates what success looks like; this is different from indicators,
which are only relevant when measured against an objective. Outcomes will demonstrate
whether success has been achieved, and the road to take. In choosing outcomes to monitor and
evaluate, consider the following factors:
 What are the strategic priorities?
 What are the desired outcomes?
 What is the international economic development like?
 How is the political system?

The process of choosing outcomes involves building a participatory and consultative process
involving stakeholders. To set and agree upon outcomes, follow the following steps:
 Identify specific stakeholder representatives
 Identify major concerns of stakeholder groups
 Translate problems into statements of possible outcome improvements
 Disaggregate to capture key desired outcomes

3) Select key performance indicators to monitor outcomes


It is important to translate outcomes into a set of measurable performance indicators; it is
through the regular measurement of key performance indicators that we can determine if
outcomes are being achieved. Indicator selection is a complicated process in which the
interests of several
relevant stakeholders needs to be considered and reconciled. Good performance indicators
should be clear, relevant, economic, adequate, and monitorable.
1) Clear - Precise and unambiguous
2) Relevant - Appropriate to the subject at hand
3) Economic - Available at a reasonable cost
4) Adequate - Provide sufficient basis to assess performance
5) Monitorable - Amenable to independent validation
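As a quick illustration, the five criteria above can be expressed as a simple screening checklist in Python. The class and field names below are assumptions for illustration, not part of any standard M&E toolkit:

```python
# Illustrative sketch: screening a candidate indicator against the five
# criteria (clear, relevant, economic, adequate, monitorable).
from dataclasses import dataclass

@dataclass
class IndicatorCheck:
    name: str
    clear: bool        # precise and unambiguous
    relevant: bool     # appropriate to the subject at hand
    economic: bool     # available at a reasonable cost
    adequate: bool     # sufficient basis to assess performance
    monitorable: bool  # amenable to independent validation

    def passes(self) -> bool:
        """An indicator is retained only if it meets all five criteria."""
        return all((self.clear, self.relevant, self.economic,
                    self.adequate, self.monitorable))

candidate = IndicatorCheck("% of children fully immunized by age one",
                           clear=True, relevant=True, economic=True,
                           adequate=True, monitorable=True)
print(candidate.passes())  # True
```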

4) Planning for improvement- selecting results targets.


Targets are specified objectives that indicate the number, timing and location of that which is to
be realized. Targets are the quantifiable levels of the indicators that a country, society or
organization wants to achieve by a given time. To establish targets, start with the baseline
indicator level and include the desired level of improvement, taking into consideration available
resources over a specific time period. In setting targets consider the following:
 Previous performance
 Expected funding and resource levels
 Target time frame
 Political nature of the target setting process
 The required level of flexibility
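A worked example may help here: interim targets are often set by starting from the baseline and spreading the desired improvement over the time frame. The linear spacing and the function name below are illustrative assumptions, not a prescribed method:

```python
# Hypothetical sketch: deriving evenly spaced interim targets from a
# baseline level and a final target over a number of periods.

def interim_targets(baseline: float, final_target: float, periods: int) -> list:
    """Return one target per period, moving evenly from the baseline
    to the final target."""
    step = (final_target - baseline) / periods
    return [round(baseline + step * i, 2) for i in range(1, periods + 1)]

# Raise coverage from 50% to 80% over three years:
print(interim_targets(50.0, 80.0, 3))  # [60.0, 70.0, 80.0]
```

In practice, targets are rarely linear; expected funding, previous performance and political factors (as listed above) usually shape the trajectory.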




5) Deciding on the type of M/E to use.
The organization should choose whether they want the evaluation to centre on activities or
results. Activity-based monitoring focuses on the activity; it seeks to ascertain that the
activities are being implemented on schedule. These activities, however, are not necessarily
aligned to the outcomes.
Results-based monitoring looks at the overall goal/impact of the project and its impacts on
society. Implementation monitoring is concerned with tracking the means and strategies (i.e.
inputs, activities and outputs in the work plans) used to achieve a given outcome; it looks at
outputs, inputs and activities.

 Monitoring the Project


A monitoring mechanism ensures that your project's planned activities are being completed in a
timely fashion. This mechanism is, in fact, part of the project management and provides useful
information for any type of evaluation.
Examples of Monitoring Mechanisms
 An electronic filing system to organize all communications, reports, minutes of meetings,
and any other existing documents that can help you keep track of your project activities.
 Document logs, including activity logs, to track the events and progress of the project
activities, and contact logs to record the time and details of contacts.
 Tracking software for project documents or recording website and other technology
related project activities.

Activity monitoring log


 Title of project
 Type of activity
 Number of events
 Start date
 Finish date
 Location(s)
 Participants: age range, gender, other specifications (e.g. education,
social/economic status, ethnicity)
 Outputs
 Resources used (for preparation and conduct): time, budget, staff
 Amendments
 Comments
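The log above could be kept as a simple electronic record, in line with the "electronic filing system" mentioned earlier. The sketch below shows one way to do this in Python; all names and sample values are hypothetical:

```python
# Illustrative electronic version of the activity monitoring log.
from dataclasses import dataclass, field

@dataclass
class ActivityLogEntry:
    project_title: str
    activity_type: str
    number_of_events: int
    start: str                       # e.g. "2024-01-15"
    finish: str
    locations: list = field(default_factory=list)
    participants: int = 0
    outputs: str = ""
    comments: str = ""

log = []                             # the project's activity log
log.append(ActivityLogEntry(
    project_title="Malaria net distribution",
    activity_type="Community outreach",
    number_of_events=2,
    start="2024-01-15", finish="2024-01-16",
    locations=["Kisumu"], participants=120,
    outputs="240 nets distributed"))
print(len(log))  # 1
```

A spreadsheet with one column per field would serve the same purpose; the point is that every activity is recorded against the same set of fields.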

6) Deciding on the Evaluation and Its Methodology


Designing an evaluation process means being able to develop Terms of Reference for such a
process (if you are the project or organization) or being able to draw up a sensible proposal to
meet the needs of the project or organization (if you are a consultant). The main sections in
Terms of Reference for an evaluation process usually include:
 Background/project description: This is background to the project or organisation,
something about the problem identified, what you do, how long you have existed, why
you have decided to do an evaluation.
 Purpose: Here you would say what it is the organisation or project wants the evaluation
to achieve.
 Key evaluation questions: What the central questions are that the evaluation must
address.
 Specific objectives: What specific areas, internal and/or external, you want the
evaluation to address. So, for example, you might want the evaluation to include a review
of finances, or to include certain specific programme sites.
 Methodology/determining the tools: here you might give broad parameters of the kind
of approach you favour in evaluation (see the section on more about monitoring and
evaluation). You might also suggest the kinds of techniques you would like the
evaluation team to use.
 Defining the indicators.
 Identification of project evaluation stakeholders.
 Logistical issues: These would include timing, costing, and requirements of team
composition and so on.

 Background/project description
This involves writing a project description that gives a clear understanding of the project
from the start, before undertaking the evaluation. A project description includes the following:
1) The needs and objectives that the project will address;
2) The target group that will take action in this project;
3) The target group that will be affected by the project;
4) The planned outcomes of the project; and
5) The activities that are required to meet those outcomes.

 Stating an Evaluation Purpose


The purpose statement presents the reasons that led you to conduct this evaluation, so you should
already have the information you need to prepare it. As an evaluator, you only need to clarify
and write it as a statement. This statement should echo the goals, values and significance of the
project from either its funder's or your organization's perspectives. The evaluation purpose
statement can also determine the type of evaluation you undertake. If the purpose is to
demonstrate how the project is meeting its objectives, using its resources, and whether any
modifications in its process are required, you should conduct a process evaluation. If the purpose
is to assess the extent to which the project has affected its participants or environment, then you
should conduct an outcome evaluation.

Examples of Evaluation Purpose Statements
 To assess the degree to which project objectives were achieved.
 To document the lessons learned.
 To provide recommendations for project development and improvement.
 To examine the changes that resulted from doing the project.
 To provide input to guide decision making for the upcoming renewal and extension of
project funding.

Note:
The purpose of an evaluation is the reason why you are doing it. It goes beyond what you want to
know to why you want to know it. It is usually a sentence or, at most, a paragraph. It has two
parts:
i. What you want evaluated;
ii. To what end you want it done.

 Choosing Evaluation Questions


Evaluation questions are the key questions that you need to answer to ensure the successful
completion of your project or to understand its impact, effectiveness, and achievements. These
questions determine what is important to be addressed or assessed. They direct you to the type of
evaluation that is required. Asking and answering the right questions will lead to useful
evaluation results that can be easily communicated with external audiences or put to use in your
organization. Evaluation questions play a crucial role in the analysis and interpretation of the
data you collect.
How to choose evaluation questions
1) Review the objectives, activities, and anticipated outcomes of the project.
2) Ask your stakeholders what questions this evaluation should answer.

3) Identify the level of detail that is required for this evaluation (e.g., whether the
evaluation should be designed based on the individual activities or components of the
project).
4) Prepare a list of questions to determine the value and significance of various aspects of
the project.
5) For each question, identify whether it relates to the process of the project, to its outputs
and immediate results, to the outcomes and changes the project could create for its
participants and environment, to the lessons learned and points that can affect future
planning and decision making, or to the new ways of work and innovations.
6) Select questions that are directly associated with at least one project objective. Their
answers can verify the project's achievements or success.
7) Select questions that are related to the future of the project. Their answers can lead to
ways to make the project - and other projects - sustainable.

Examples of Evaluation Questions
Evaluation questions typically address four areas:
 Project process
 Project outputs
 Project outcomes/impact
 Lessons learned

1) Evaluation questions related to process:
 Are the activities being performed as planned?
 Is the project reaching the intended target population?
 How satisfied are the participants with their involvement in this project?
 How should the planned activities be modified to work better?
 What lessons can we learn from the way in which the project is unfolding?

2) Evaluation questions related to outputs:
 Is the project reaching the intended number of participants?
 Is the project providing the planned services?
 Are the activities leading to the expected products?
 Are there any unexpected products?
3) Evaluation questions related to outcomes/impacts:
 Did the participants experience any changes in their skills, knowledge, attitudes,
or behaviours?
 What changes were expected?
 What are the effects of the project on my organization (e.g., organizational pride,
enhanced networking, and partnerships)?
 Did the project meet the needs that led to this project? Do those needs still exist?
 Are there any other related needs that have arisen that the project did not address?
 Did we experience any changes as a result of the project? Are the changes
positive?
 What could be the long-term impacts of this work?

4) Evaluation questions related to alternatives and lessons learned:
 What could have been done differently to complete the project more effectively?
 What key changes should be made to the project to enhance achievement of
objectives?
 What are the lessons learned for the future?
 What outcomes should be considered if an organization wants to repeat this or
conduct a similar project?

Characteristics of effective evaluation questions
 Thought-provoking
 Challenge assumptions
 Focus inquiry and reflection
 Raise many additional questions

 Choosing Evaluation Tools
Evaluation tools help you gather the information you need to answer your evaluation questions.
They can be different from the tools you use to carry out the core activities of the project.

Evaluation tools can use both formal and informal methods for gathering information. Formal
evaluation tools include focus groups, interviews, survey questionnaires, and knowledge tests.

Informal evaluation tools include observations, informal conversations, and site visits.
Depending on your evaluation questions, you may need a tool that helps you gather quantitative
information through counts, ratings, and rankings.

How to choose evaluation tools

1. Review your evaluation questions and project activities.
2. Complete a copy of the Evaluation Tools Matrix provided in Appendix 1.
 Enter your evaluation questions in the first column.
 Think about the information you need to answer these questions.
 Check the tools required for gathering the necessary information to answer each
question.
 Identify whether the tools are available and need modification, or whether you need
to develop them.
 Consider the information you can gather by using each tool; you may use one tool
to address more than one question.
3. Discuss your completed matrix with the evaluation group and project team.
4. Learn more about the selected tools and, if necessary, make sure that there are enough
internal resources (i.e., time, skills, and budget) to develop them.
5. Search for external resources and learning materials if the internal resources are
insufficient.

Evaluation tools, their type (informal/formal; quantitative/qualitative), and descriptions:
 Survey (formal; quantitative): a set of predetermined questions about certain topics that
are answered by the target audience.
 Interview (formal; qualitative): a set of questions (predetermined or not) about certain
topics that are posed to a target audience and followed by additional questions and
conversation.
 Knowledge/skill tests (formal; quantitative): a set of questions that determines the level
of knowledge or skills of project participants.
 Focus group discussion (formal; qualitative): a group discussion with a relatively small
number of selected people about a certain question.
 Evaluation form (formal; quantitative): a set of questions that determines the
participants' opinions, attitudes, and understanding once a project activity is complete.
 Journal recording (informal; qualitative): a self-report of daily activities by project
participants.
 Onsite visits (informal; qualitative): a combination of observation and interview that
occurs in the project environment.
 Activity log (informal; quantitative): a staff report of daily activities.
 Observation notes (informal; qualitative): notes taken through direct observation of
verbal and non-verbal behaviour that occurs in project activities.
 Organisation documents (formal; quantitative and qualitative): administrative records of
project activities, e.g., reports, minutes, registration forms.
 Anecdotal record (informal; qualitative): stories and narratives about an event, an
experience, or an individual, described by project staff or participants.

 Identifying Evaluation Sources
Evaluation sources are the materials or people that will help you gather information. These can
include the project documentation and files, and the project participants, staff, and members of a
committee. Your evaluation plan should specify these sources and explain how and when you
will approach them.
To identify evaluation sources, you need to review the evaluation tools that are required for this
evaluation and decide to whom these tools must be applied (e.g., workshop participants).

 Determining the project evaluation budget
You should plan your budget in a way that makes your evaluation realistic, manageable,
efficient, and productive. In some cases, projects have a fixed budget and evaluators need to
adjust their activities to that budget. In other cases, evaluators need to develop a budget.

Example of a budget framework for project evaluation

(Columns: Activity | Position in charge | Number of days | Cost per day | Total cost)

Management and direction
1. Evaluation planning
 Contact stakeholders
 Hold planning meetings with the evaluation group
 Develop a draft evaluation plan
 Discuss and finalize the evaluation plan
 Subtotal
2. Evaluation implementation (develop tools and gather data)
 Recruit staff
 Identify and review existing tools
 Develop a monitoring system
 Implement and maintain the monitoring system
 Develop new evaluation forms and interview guides
 Test the newly developed tools
 Implement the tools and gather data
 Subtotal
3. Information analysis
 Prepare data for analysis
 Analyze data and interpret results
 Obtain professional and technical support (where applicable)
 Hold discussion meetings with the evaluation group
 Complete the interpretation of results
 Subtotal
4. Communication
 Prepare and review an evaluation report
 Prepare presentations
 Prepare other media-related materials (where applicable)
 Present findings to various stakeholders
 Subtotal
5. Travel and meetings
 Related to the evaluation group
 Others
 Subtotal
6. Operating expenses
 Photocopying/printing
 Couriers
 Phone/fax, etc.
 Subtotal
Grand total
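The arithmetic behind the budget framework is simple: each activity line's total cost is the number of days multiplied by the cost per day, and line totals roll up into phase subtotals and a grand total. A minimal sketch of that calculation (all activity names and figures below are hypothetical, purely for illustration):

```python
# Illustrative sketch of the budget arithmetic in the framework above:
# each line's total = number of days x cost per day; subtotals and the
# grand total are sums. All figures are hypothetical.

def line_total(days: float, cost_per_day: float) -> float:
    """Total cost for one budget line."""
    return days * cost_per_day

# Hypothetical budget lines grouped by evaluation phase:
# (activity, number of days, cost per day)
budget = {
    "Evaluation planning": [
        ("Contact stakeholders", 2, 150.0),
        ("Hold planning meetings", 3, 150.0),
    ],
    "Evaluation implementation": [
        ("Develop monitoring system", 10, 200.0),
        ("Gather data", 15, 120.0),
    ],
}

grand_total = 0.0
for phase, lines in budget.items():
    subtotal = sum(line_total(days, rate) for _, days, rate in lines)
    grand_total += subtotal
    print(f"{phase}: {subtotal:.2f}")
print(f"Grand total: {grand_total:.2f}")
```

Laying the budget out this way shows at a glance which phase dominates the evaluation cost before the plan is committed.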

Qualities of good evaluation
Good evaluations should satisfy the following qualities:
 Impartiality
 Technical adequacy
 Usefulness
 Stakeholder involvement
 Feedback and dissemination
 Value for money

 Identification of project evaluation stakeholders
Stakeholders are the individuals or organizations that have an interest in your project; they may
make decisions, participate in the project activities, or be affected by those activities. Your
project may have both primary and secondary stakeholders. Primary stakeholders are those
who are closely and directly involved in or affected by the results of your project (e.g., the
participants themselves and an organization that has invested in your project). Secondary
stakeholders are less involved in and less affected by your project but may still derive some
benefit from it (e.g., an organization that is interested in knowing the results of your project).

How to identify evaluation stakeholders
1) Prepare a list of the individuals and organizations that have interests in the project and its
evaluation.
2) Determine their interests in this project and its evaluation.
3) Identify their information needs, particularly from this evaluation.
4) Identify their level of involvement in the project, based on their needs and interests.


5) Identify potential evaluation participants (i.e., primary stakeholders).
6) Invite participants to be part of the project evaluation group
7) Identify the potential users of the products of this evaluation (i.e., secondary
stakeholders).

Examples of Primary Stakeholders
 Project team members
 Project participants
 Funder(s)
 Your management staff
 Your board members
 Your volunteers

Examples of Secondary Stakeholders
 Members of the community in which the project is being conducted
 Members of the project's target population (e.g., youth, seniors, new citizens)
 Your organization's external members or partners
 Associations related to the topic of your project

 Selecting Evaluation Types
Selecting an evaluation type provides direction for your evaluation. It helps keep the evaluation
process focused on its main purpose and determines the evaluation questions that should be
answered and the data that should be collected. The most common types of evaluation are
formative, process, summative, and outcome.
1) Formative evaluation is an ongoing evaluation that starts early in a project. It assesses
the nature of the project, the needs the project addresses, and the progress and
implementation of the project. It can identify major gaps in the project's content and
operational aspects (i.e., what was done and how) and suggest ways to improve them.
Categories of formative evaluation may include:
 Performance logic chain evaluation: determines the strength and logic of the
causal model behind the policy, program, or project. This evaluation addresses the
plausibility of achieving the desired change based on similar prior efforts. The
intention is to avoid failure from a weak design that would have little or no
chance of achieving the intended outcomes.

 Pre-implementation assessment: assesses whether the objectives are well
defined, whether the plan is coherent, and whether resource deployment is well
structured. The intention is that quality should be built into the intervention, not
inspected into it.

 Meta-evaluation: determines what we know about a system and with what
degree of confidence. It establishes the criteria and procedures for systematically
looking across existing evaluations to summarize trends and to generate
confidence or caution in the cross-study findings.

2) Process evaluation is used to monitor activities to make sure a project is being
implemented and completed as designed and on time. It can be complementary to
formative evaluation.
3) Summative evaluation is an overall assessment of the project's effectiveness and
achievements. It reveals whether the project did what it was designed to do. It provides
information for future planning and decisions and usually is completed when the project
is over. This type of evaluation usually does not directly affect the current project, but it
helps stakeholders decide the future of this or similar projects.
4) Outcome evaluation assesses the extent to which a project has achieved its intended
effects, and other effects it could have had on the project's participants or the
environment. It focuses on immediate, intermediate, or ultimate outcomes resulting from
the completion of the project.

Factors to consider when selecting the type of project evaluation
 The objectives and priorities of your project



 The purpose of the project evaluation
 The nature of the project (i.e., whether it is process-oriented or outcome-oriented)
 The time frame for conducting the evaluation (i.e., during or after the project)
 How, and by whom, the results will be used
 The time frame and budget for completing the evaluation

 Assigning responsibilities and implementing evaluation plan.
This step involves the following activities;
1) Engaging an evaluation group;
2) Acquiring skilled staff;
3) Obtaining support from the parent organization;
4) Determining evaluation ethical codes;
5) Identifying evaluation indicators;
6) Developing /preparing evaluation tools; and
7) Managing data collection.

 Engaging an Evaluation Group
Once you have the first draft of your evaluation plan, you need to establish an evaluation group
of three to six people, typically staff, the project manager, and a few other stakeholders. The
evaluation group will assist you throughout the evaluation by:
a) Reviewing the progress of the work;
b) Providing advice;
c) Providing solutions to issues that may arise; and
d) Supporting the use of the evaluation results.

Some of the important project evaluation skills
 Understanding of the concept and methods of evaluation;
 Understanding of applied research;
 Planning and monitoring;
 Data analysis;
 Data collection and data management;
 Result interpretation;
 Analytical thinking;
 Critical thinking; and
 Report writing.

Increasing Organizational Support
Project evaluation requires a co-operative and collaborative atmosphere in your organization, and
financial and technical support. Examples of organizational support required for evaluation:
 Management support- To provide resources and to assist in decision-making and
modifications when facing challenges. Also, to support the use of evaluation results.
 Other staff support- To implement the evaluation plan.
 Climate of trust- To gather adequate and correct data.
 Technical support- To use appropriate software for developing tools and analyzing data.
Also, to use online technologies for communication.

How to increase organizational support
1) Let other staff members know about the project's evaluation and its activities.
2) Highlight the benefits that your organization can achieve from conducting an evaluation
and how it will pay off.
3) Explain the usefulness of the evaluation and how it can facilitate or improve the work of
staff.
4) Create or explain the positive links between your project evaluation and other functions
of your organization (e.g., fundraising, marketing, and communications).
5) Create a learning environment around evaluation.
6) Clarify the purpose of evaluation and the use of its results.
7) Share the evaluation results.



 Determining ethical conduct for project evaluations
As you implement your project evaluation, you may encounter ethical issues related to the topic
of the project, the project funder(s), the readiness of your organization for evaluation, or the
organizational policies or procedures with which the project might be associated.
considerations could be related to the project participants or the tools you use for gathering data.

Examples of ethical considerations when conducting evaluations

 Disclose any conflict of interest that you or any member of the evaluation group may
have.
 Clarify your staff's and your own credibility and competence in undertaking the
evaluation. Anticipate your collective shortcomings, and ask for solutions and help to
mitigate them.
 Be aware of any substantial risks that this evaluation may pose for various stakeholders
and discuss them with the evaluation group.
 Remain unbiased and fair in all stages of evaluation. Make sure that your personal
opinions toward a group, topic, or social matter will not interfere with your evaluation
work.
 Be ready to negotiate when dealing with various stakeholders and their expectations.
 Be clear and accurate in reporting the evaluation results, and explain the limitations of the
work and recommendations for improvements.

Examples of ethical considerations when collecting data
 Inform participants about the purpose of data gathering and how you will use and analyze
data.
 Tell participants how the results will be used and any potential negative consequences.
 Explain to participants about data privacy and confidentiality and how it will be
protected.
 Obtain consent forms if you think that identifying a respondent might be necessary.
 When analyzing or reporting the qualitative data, be careful about sensitive comments or
those that may reveal personal identities.
 Obtain necessary permission when approaching children, institutions (e.g., hospitals,
universities), or other sensitive groups for data.
 Understand the participants' cultural norms before approaching them for data.
 Consider offering incentives for participants - both people and organizations - such as
providing some feedback or a summary of the evaluation results.



CHAPTER THREE: EVALUATION INDICATORS
Indicators are measurable factors or evidence that shows the extent of the project's progress,
success, or achievements. Identifying indicators can help you in collecting useful data and in
your search for required evaluation tools and information sources. Indicators are succinct
measures that aim to describe as much about a program as possible in a few data points. They
help us understand a program, enable us to compare it and improve it where possible, and
support accountability.
Four Things to Know about Indicators
 Indicators only indicate
 Indicators encourage explicitness
 Indicators usually rely on numbers and numerical techniques
 Indicators should not just be associated with fault-finding
Role of an indicator

 Demonstrate change (or lack of) in key program/project area(s)
 Track trends over time.
 Provide evidence of achievement (or lack of) results and activities.
Why are indicators important?

 Indicators enable you to reduce a large amount of data down to its simplest form (e.g.
percent of clients who tested after receiving pre-test counseling, prevalence rate).
 When compared with targets or goals, indicators can:
 signal the need for corrective management action,
 evaluate the effectiveness of various management actions, and
 provide evidence as to whether objectives are being achieved
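To make the data-reduction idea concrete, the sketch below computes a single coverage indicator from raw counts (e.g., percent of counseled clients who went on to test) and compares it with a target to signal whether corrective management action may be needed. All names and figures are hypothetical:

```python
# Illustrative sketch: reducing raw monitoring counts to one indicator
# and comparing it against a target. All figures are hypothetical.

def percent_indicator(numerator: int, denominator: int) -> float:
    """Reduce two raw counts to a single percentage indicator."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return 100.0 * numerator / denominator

# Hypothetical monitoring data: clients counseled vs. clients tested
clients_counseled = 480
clients_tested = 408

indicator = percent_indicator(clients_tested, clients_counseled)
target = 90.0  # hypothetical program target (%)

print(f"Percent of counseled clients who tested: {indicator:.1f}%")
if indicator < target:
    print("Below target: corrective management action may be needed")
else:
    print("Target met")
```

The comparison against the target is what turns a descriptive number into a management signal.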
Expressing indicators
An indicator is usually expressed in numerical form:
 number
 ratio
 percentage
 average
 rate
 index (composite of indicators)
An indicator can also be expressed in non-numerical form, such as in words.
Non-numerical indicators are also referred to as qualitative or categorical indicators.
Characteristics of a Good Indicator

 Validity: Measures in fact what it intends to measure conceptually
 Reliability: Minimizes measurement error
 Precision: Is operationally defined in clear terms
 Independence: Non-directional and unidimensional, depicting a specific, definite value at
one point in time

 Timeliness: Provides a measurement at time intervals relevant and appropriate in terms of
program goals and activities
 Comparability: Generates corresponding or parallel values across different population
groups and program approaches
Rationalization and Selection of Program Indicators

 Prepare and update list of program indicators to track each level of result
 Identify data requirements and sources for each indicator
 Limit the number of indicators to those needed for program management and information
reporting obligations

How to identify evaluation indicators:
1) Review the project objectives and think of the information and evidence you need to
demonstrate the achievement of each one.
2) Review the evaluation questions and think of the information you need to answer each
question.
3) Review the project activities and look for any measurable factor indicating each activity's
progress.
4) Review the anticipated project outcomes and think of the information and evidence that
shows those outcomes have occurred, or that indicates progress toward them.


5) Review the project outputs and determine how they can represent the project's progress
and achievements.
6) Specify any evidence for the project claims or achievements.

Measure of Quality, quantity, Time and location (QQTL)
Indicators are measured in terms of quality, quantity, and time (and sometimes place and cost).
Putting numbers and dates on indicators is called targeting.
Number of indicators required
The fewer the better. Use only the number of indicators required to clarify what must be
accomplished.

How to construct objectively verifiable indicators (OVI)
Begin with the basic indicator. Make sure it is numerically quantifiable, and then add the
quality and time dimensions.

(Quantity+quality+time=QQT)

As an example

Step 1: Basic indicator

Rice yields of small-scale farmers increased.

Step 2: Add quantity

Rice yields of small-scale farmers increased by X bushels.

Step 3: Add quality

Rice yields (of the same quality as the 1997 crop) of small-scale farmers (owning 3 hectares
or less) increased by X bushels.

Step 4: Add time

Rice yields (of the same quality as the 1997 crop) of small-scale farmers (owning 3 hectares
or less) increased by X bushels by the end of the 1998 harvest.
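The layering in Steps 1 to 4 can be mirrored in a small sketch that assembles an objectively verifiable indicator from its quantity, quality, and time components. The `build_ovi` helper is hypothetical, written only to illustrate the QQT construction:

```python
# Illustrative sketch: layering quantity, quality, and time onto a basic
# indicator statement, mirroring the QQT steps above. The build_ovi
# helper is hypothetical.

def build_ovi(basic: str, quantity: str = "", quality: str = "",
              time: str = "") -> str:
    """Compose an indicator statement from QQT parts."""
    parts = [basic]
    if quantity:
        parts.append(f"by {quantity}")
    if time:
        parts.append(f"by {time}")
    indicator = " ".join(parts)
    if quality:
        indicator += f" ({quality})"
    return indicator

# Step 1: basic indicator
print(build_ovi("Rice yields of small-scale farmers increased"))
# Step 2: add quantity
print(build_ovi("Rice yields of small-scale farmers increased",
                quantity="X bushels"))
# Steps 3-4: add quality and time
print(build_ovi("Rice yields of small-scale farmers increased",
                quantity="X bushels",
                quality="same quality as 1997 crop",
                time="the end of the 1998 harvest"))
```

Each call adds one dimension, exactly as the worked rice-yield example does.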

Category of indicators

1. Goal Level Indicators

Often describe program or sector objectives to which this project and several
others are directed. For this reason, the Goal level Indicators may include targets
beyond the scope of this project, such as small farmer income increased where
farmer income may be increased by the combined outcomes of several projects.

2. Purpose Level Indicators-The project purpose is the main reason why you are doing
the project. It is why you are producing outputs.
3. Output Level Indicators-By definition, these indicators establish the terms of
reference for the project. If a project team or contractor is responsible for all the
outputs, then these indicators define the deliverables for which the contractor is
accountable.



4. Activity Level Indicators-The OVI at the Activity Level are usually inputs
or the budget. Often this will look like a performance budget, since costs can
be related directly to activities.
Indicators can also be categorized as:
 Quantitative, such as the number of participants, number of website visits, and rate or
rank of opinions.
 Qualitative, such as positive or negative feedback, problems, complaints, and comments.
 Output indicators, if they show the project's progress toward an objective.

Examples of evaluation indicators
1) Quantitative indicators
 Response rate to an advertisement, announcement, etc.
 Number of visits to the project website
 Number of inquiries
 Participants' level of satisfaction or engagement (e.g., on a 1-to-4 scale)
 Frequency of communications
 Number of resources used
 Percentages related to the use of various services
 Average age or education of respondents to an advertisement
 Knowledge test scores or ranks
2) Qualitative Indicators
 Types of responses to an advertisement, announcement, etc.
 Types of inquiries
 Feedback on the effectiveness of services, benefits of a program, comprehensiveness of
materials, etc.
 Observable changes in attitudes, behaviours, skills, knowledge, habits, etc.
 Types of communications
 Types of problems, complaints about services, programs, etc.
 Types of resources used
 Participants' perceptions of the project programs, services, etc.

3) Output Indicators
 Number of workshops held
 A Volunteer Fair held
 Number of volunteers trained
 Number of charitable or nonprofit organizations engaged
 A published manual
 Website
 Training tool kit or workshop tool kit



CHAPTER FOUR: EVALUATION FRAMEWORKS/DESCRIPTION MODELS

A framework is a structure for supporting or enclosing something else, especially a skeletal
support used as the basis for something being constructed. It is also a set of assumptions,
concepts, values, and practices that constitutes a way of viewing reality, or a simplified
description of a complex entity or process (synonymous with the term model).
Types of Frameworks
a) Conceptual Frameworks
b) Results Framework
c) Logical Framework
d) Logic Model

 Conceptual frameworks
Conceptual, or “research”, frameworks (models) are diagrams that identify and illustrate the
relationships among systemic, organizational, individual, or other salient factors that may
influence program/project operation and the successful achievement of program or project goals.

Example of an HIV/AIDS conceptual framework

[Diagram omitted. The framework links underlying determinants (e.g., mixing patterns,
concurrency) and proximate determinants (new partner acquisition, condom use, abstinence,
risky sexual practices, concurrent STIs, chemotherapy, treatment with ARVs, treatment of
opportunistic infections) to biological determinants (rate of contact of susceptible with infected
persons, efficiency of STI transmission per contact, duration of infectivity), which in turn drive
the health outcomes (HIV incidence, STI incidence) and the demographic outcome (mortality).]

Source: Boerma and Weir, 2005.

Role of Conceptual Frameworks
 Deriving program goals
 Identifying known or expected relationships among program and environmental factors
that may affect the effectiveness of the activities or the outcome of the intervention
 Developing operational plans
 Clarifying the program's assumptions

Goals and Objectives
 Goal: a broad statement of a desired, long-term outcome of the program
 Objectives: statements of desired, specific, realistic and measurable program results
Characteristics of Objectives - SMART
 Specific: identifies concrete events or actions that will take place
 Measurable: quantifies the amount of resources, activity, or change to be expended and
achieved
 Appropriate: logically relates to the overall problem statement and desired effects of the
program
 Realistic: provides a realistic dimension that can be achieved with the available resources
and plans for implementation
 Time-based: specifies a time within which the objective will be achieved
Examples of Goals
National AIDS Control Council (NACC) 2005-2010 Strategic Plan
Goal (s):
 Reduce the spread of HIV
 Improve the quality of life of those infected and affected
 Mitigate the socio-economic impact of the epidemic

Objectives
 Objective 1: Number of new infections reduced
 Objective 2: Improved health and quality of life of people infected and affected by
HIV/AIDS
 Objective 3: Strengthened capacity of NACC & stakeholders to respond to the HIV/AIDS
epidemic at all levels through improved research, M&E and improved management &
coordination

PEPFAR Goals: 2008
 Treating 2 million HIV+ people by 2008
 Preventing 7 million new infections
 Caring for 10 million HIV infected and affected individuals by 2008 (including orphans
and vulnerable children)

 Results Framework
Results frameworks are diagrams that identify steps, or levels, of results, and illustrate the
causal relationships linking all levels of a program’s objectives (also called strategic
frameworks). A results framework focuses on the end result(s) and the strategies we can use to
achieve them, and it identifies the logic and links behind a program and the elements necessary
and sufficient for success.
Elements of Results Frameworks
 Goal Statement— the change in health conditions that we hope to achieve
 Strategic (or Key) Objective (SO)—the main result that will help us achieve our goal and
for which we can measure change
 Intermediate Results (IRs)—the things that need to be in place to ensure achievement of
the SO
 Strategies & Activities —what a project does to achieve its intermediate results that
contribute to the objective

Examples of Goals and Related Strategic Objectives
 Goal: Reduce child mortality
– SO: Increased use of child preventive health behaviors (E.g. PMTCT)
 Goal: Improved adolescent health
– SO: Increased use of risk reduction behaviors among adolescents (use of
condoms, abstinence)
 Goal: Reduce the spread of HIV/AIDS (we don’t measure)
– SO: Increased Condom use
 Goal: Improve quality of life of those infected by HIV/AIDS
– SO: Increased availability and access to treatment and care (e.g. ART, HBC)

General Characteristics of an Intermediate Result

 Statement of a desired outcome or a situation that changed as a result of project
intervention—not an activity or process
– ART services are accessible to target population
– VCT services are accessible to young people
 This outcome contributes to our ability to get to our SO (e.g., use of HIV/AIDS services).
 The result is measurable.


Strategies and Activities
These are things the project does in order to achieve the desired outcomes or changes in the
situation. For example:
– Advocate for Counseling and Testing
– Strengthen HIV and AIDS supply chain
– Train Peer counselors and educators
– Strengthen FP logistic systems
– Advocate for contraceptive supply

Common Difficulties in Formulating Results Frameworks and Program Design
 Mixing up results, strategies, and activities
 Starting Project Design with a list of activities that may not logically lead to the desired
objective
 Choosing indicators that truly measure the results (we will get to this later on)

Example of a Results Framework Application

Donor/USAID Reproductive Health Program

SO1: Increased utilization of family planning/reproductive health (FP/RH) services
 IR1: Strengthened sustainability of the FP/RH program
– IR1.1: Improved policy environment for the provision of FP/RH services in the
public and private sectors
– IR1.2: Strengthened NGO advocacy for the FP program
 IR2: Expansion of high-quality FP/RH services in the public and private sectors
– IR2.1: Increased availability of postpartum and postabortion FP services
– IR2.2: Increased accurate knowledge of clients about modern methods and FP
services
– IR2.3: Improved job performance of health providers, trainers, and administrators

 Logical Frameworks
A logical framework (logframe) is a management tool for strategic planning and program/project management. It takes the form of a table (or framework) and aims both to be logical to complete and to present information about projects in a concise, logical and systematic way.
A logframe summarizes, in a standard format:
 What your project is trying to achieve
 How it aims to do this
 What is needed to ensure success
 Ways of measuring progress and the potential problems along the way

Purposes of a logframe:
 Summarizes what the project intends to do and how
 Summarizes key assumptions
 Summarizes outputs and outcomes that will be monitored and evaluated



Explanation of components in a logframe

 Goal: an objective greater than that of the program itself -- IMPACT
Ex. Reduction in HIV/AIDS prevalence
 Purpose: the objective to be reached by implementing the program, and likely to outlive the program -- EFFECT
Ex. Increase in use of condoms in non-marital sex acts
 Outputs: the "products" or "deliverables" of the activities undertaken -- OUTPUT
Ex. Number of midwives trained and supplied with safe delivery kits; number of clients served
 Activities: those things which must be done to achieve the outputs -- INPUT/PROCESS
Ex. Creating condom distribution points, upgrading clinics

Logical Framework Matrix Used in Project Design and Monitoring

A table that summarizes the final design of a project. It usually comprises 16 cells organized under four major headings of the project design summary, which constitute the project goal, purpose, outputs and inputs (see Table 2.4 below).

Table 2.4 Log Frame Matrix

Columns:
– Project summary
– Objectively verifiable indicators / performance targets (indicators of change)
– Means of verification / monitoring mechanism (how change will be measured)
– Assumptions and risks (what will be taken for granted)

Rows:
– Goal: (state the goal)
– Objective: (state the objective)
– Output 1: Activity 1.1, Activity 1.2, Activity 1.3
– Output 2: Activity 2.1, Activity 2.2, Activity 2.3, Activity 2.4
– Inputs: Input 1, Input 2, Input 3 (each with a description and cost per unit)



For each row of the matrix: project description, performance indicators, means of verification, and assumptions.

Goal: The broader development impact to which the project contributes, at a national and sectoral level.
– Indicators: Measures of the extent to which a sustainable contribution to the goal has been made; used during evaluation.
– Means of verification: Sources of information and methods used to collect and report it.

Purpose: The development outcome expected at the end of the project; all components will contribute to this.
– Indicators: Conditions at the end of the project indicating that the purpose has been achieved and that benefits are sustainable; used for project completion and evaluation.
– Means of verification: Sources of information and methods used to collect and report it.
– Assumptions: Concerning the purpose-to-goal linkage.

Component objectives: The expected outcome of producing each component's outputs.
– Indicators: Measures of the extent to which component objectives have been achieved and lead to sustainable benefits; used during review and evaluation.
– Means of verification: Sources of information and methods used to collect and report it.
– Assumptions: Concerning the component objective-to-purpose linkage.

Outputs: The direct measurable results (goods and services) of the project, which are largely under project management's control.
– Indicators: Measures of the quantity and quality of outputs and the timing of their delivery; used during monitoring and review.
– Means of verification: Sources of information and methods used to collect and report it.
– Assumptions: Concerning the output-to-component objective linkage.

Activities: The tasks carried out to implement the project and deliver the identified outputs.
– Indicators: Implementation/work program targets; used during monitoring.
– Means of verification: Sources of information and methods used to collect and report it.
– Assumptions: Concerning the activity-to-output linkage.

Example of a logical framework in practice:

Task Force on Communicable Disease Control in the Barents and Baltic Sea Regions: Tuberculosis

GOAL
A. Reduced burden of TB, to reach European average levels
B. Further development of multi-drug resistant TB (MDR-TB) prevented

Performance indicators:
A. Notification rate
B-1. Treatment outcome
B-2. Prevalence of multi-drug resistance in "new" and previously treated TB patients

Means of verification:
A. Annual notification reports (surveillance)
B-1. Annual reports on outcome of treatment (cohort analysis)
B-2. Periodic reports on surveillance of anti-TB drug resistance

Assumptions:
– A dual HIV/TB epidemic causing an increase in TB incidence does not occur
– Control of the private practitioner and pharmaceutical sectors to prevent MDR
– Prevalence of resistance to second-line anti-TB drugs low enough at the outset so as not to seriously compromise the treatment success ratio

PURPOSE
Implementing cost-effective measures for the prevention and control of TB operating within civil and penitentiary health services in the Task Force area

Performance indicators:
1. Coverage of TB programmes in line with international recommendations
2. Proportion of patients defaulting out of patients treated
3. Proportion of previously treated cases among all cases
4. Proportion of patients on ambulatory treatment out of all patients treated

Means of verification:
1. Annual reports
2. Annual reports
3. National/local annual notification reports (surveillance)
4. Annual record reviews during site visits (consecutive series of patients)

Assumptions:
– Stable political situation, sustained political commitment and financing
– Sufficient numbers of competent health care personnel in the government sector

OUTPUTS
8. Measures to increase awareness of TB and its treatment among all members of the community developed and tested

Performance indicators:
8.1. Number of pamphlets/posters printed and distributed annually
8.2. Awareness of TB among target groups

Means of verification:
8.1. Material produced/distributed
8.2. KAP or other surveys (before/after)

Assumptions:
1. Relevant persons motivated to participate
2. Professional interest, sufficient financing
3. Target groups interested in (their) health and able to participate
ACTIVITIES
8.1 Identify groups at risk for TB
8.2 Develop advocacy material suitable for all target groups (not only risk groups)
8.3 Organize health education directed at all target groups
8.4 Involve the mass media

Means of verification:
Financial management reports


Steps for Constructing a Logical Framework
Work through the following basic sequence in developing a project design using the logical framework. Throughout its development, always follow the underlying principle of working from the general to the specific.

Step 1: Define the overall Goal to which your project contributes

In the first phase of logical framework development you should prepare a general description, or narrative summary, for the project.

Step 2: Define the Purpose to be achieved by the project

You should normally have only one purpose in a project. The reason for this is very practical: experience has shown that it is easier to focus a project's outputs on a single purpose. If you have several purposes, efforts become diffused and the design is weakened.

Step 3: Define the Outputs for achieving this Purpose

Step 4: Define the Activities for achieving each Output


Remember that project management involves carrying out certain activities. You must
include these activities in your logical framework. Provide a summary schedule of periodic
meetings, monitoring events and evaluations. Some planning teams emphasize these
activities by including an initial Output called 'Project Management System Installed and
Operational'.

Step 5: Verify the Vertical Logic with the If/Then Test


In a well-planned logical framework, at the lowest levels on the logical framework you can
say that if certain activities are carried out you can expect certain outputs to result. There
should be the same logical relationship between the Outputs and the Purpose, and between
the Purpose and the Goal.

As an example, you could argue that if you achieve the output of supplying farmers with improved seed, then the purpose of increased production will follow.
As you make the cause-and-effect linkages between objectives at the different levels stronger, your project design improves.
The logical framework forces you to make this logic explicit. It does not guarantee a good
design because the validity of the cause and effect logic depends on the quality and
experience of the design team.

Step 6: Define the Assumptions related to each level

The assumptions may describe important natural conditions, such as 20 centimetres of rain falling between May and October. They may be human factors, such as no labour strikes during project start-up, timely release of budget, farmers willing to try new methods, or crop prices remaining stable. They may relate to other projects that must be carried out in conjunction with this project, such as a World Bank irrigation project remaining on schedule, or a UN fertilizer project completed by start-up.
The narrative summary describes the IF/THEN logic, that is, the necessary conditions linking each level. Assumptions complete the picture by adding the "if/and/then" logic. They describe conditions which are needed to support the cause-and-effect link between each level. They are also known as the sufficient conditions.

If cause and effect is the core concept of good project design, necessary and sufficient conditions are its corollary. The necessary conditions describe the cause-and-effect relationships between the Activity-to-Output, Output-to-Purpose and Purpose-to-Goal objectives for accomplishing project objectives. This is the internal logic; by itself it does not define the conditions required at each level for accomplishing the next higher level.

The importance of clarifying assumptions

Assumptions are hypotheses about project success: factors which are outside the control of the project but which nevertheless influence the cause-and-effect relationships integral to project design.
Assumptions are external conditions over which the project chooses not to exert, or does not have, control, but on which the accomplishment of objectives depends.
You can determine the assumptions by asking: "What conditions must exist, in addition to my objective (at the activity, output, purpose, or goal level), in order to achieve the next level?"
Important areas where assumptions may influence the project include:
 Market conditions/price
 Macroeconomic policies e.g. fiscal policies and monetary
 Political and social conditions
 Sector policy and conditions
 Environmental conditions
 Private sector capability
 Government administrative capability
 Community and other development support partners
 Donors and other funding agencies etc

Ways of Managing Assumptions

i. Do nothing, if the assumptions are not serious enough to threaten the project
ii. Change the project design (i.e. change outputs or inputs) when the assumptions are too risky
iii. Add a new project to minimize the effects of risk, especially when dealing with environment-related risks
iv. Abandon the project when the risk is too great to bear

Step 7: Define the Objectively Verifiable Indicator (OVI) at Goal then Purpose then
Output then Activity levels.

The Necessary and Sufficient Test

The OVIs tell us not only what accomplishment is necessary, but also what will be sufficient performance to assure that we can reach the next level of objective. For this reason it is best to begin at the end: that is, begin with the higher-order objectives and work backwards through the causal chain: Goal, then Purpose, then Outputs, then Activities.

Quantity, Quality and Time (QQT)

Normally you will state indicators in terms of quantity, quality and time (and sometimes place and cost). Putting numbers and dates on indicators is called targeting. Although it is often claimed that higher-order objectives are not measurable, this is not true. We may choose not to put targets on them, but we can give all Goal, Purpose and Output indicators targets.

Number of indicators required

The fewer the better. Use only the number of indicators required to clarify what must be
accomplished to satisfy the objectives stated in the Narrative Summary column.

Step 8: Define the Means of Verification (MoV)


The rule is that the indicators you choose for measuring your objectives must be verifiable by
some means. If they are not, you must find another indicator.

Step 9: Prepare the Performance Budget


You have already seen that the OVIs at the activity level are usually the inputs or the budget.
Now you need to prepare the full performance budget. Relate the costs directly to the
activities. You may need to use a set of standard categories to meet the requirements of the
agency you are working for.

Step 10: Check the Logical Framework using the Project Design Checklist

Work through the project design checklist as an aid to ensuring that your project meets all the requirements of a well-designed logical framework.

Step 11: Review the Logical Framework design in the light of previous experience

You should have been thinking about your previous experience of projects throughout the preparation of the logical framework.
Advantages of Logical Framework Analysis
The major advantages of the Logical Framework approach are:
1. It brings together in one place a statement of all key components of the project or
programme.
2. It meets the requirements of good project design and responds to weaknesses seen in many past designs.
3. It is easy to learn and use.
4. It does not add time or effort to project management but reduces it.
5. It anticipates implementation.

6. It sets up a framework for monitoring and evaluation where planned and actual results
can be compared.

7. It assists communication between project donors and implementers.

Limitations of the Logical Framework Approach


1. It is not a substitute for other technical, economic, social and environmental analysis. It cannot replace the use of professionally qualified and experienced staff.

2. Rigidity in project management may arise when objectives and external factors specified during design are over-emphasized.

3. It requires a team process with good leadership and facilitation skills to be most effective.

4. The process requires strong facilitation skills to ensure real participation by appropriate stakeholders.

Logic Models

A logic model describes the main elements of a program and how they work together. The model is often displayed in a flow chart, map, or table to portray the sequence of steps leading to program outcomes.

Example of the Format of a Logic Model

Problem Statement → Implementation (Inputs → Activities → Outputs) → Outcomes → Impacts

 Inputs: Resources used in a program, such as money, staff, curricula, and materials.
– GAP, government, and other donor funds
– C&T personnel
– VCT protocols and guidance
– Training materials
– HIV test kits
 Activities: Services that the program provides to accomplish its objectives, such as outreach, materials distribution, counseling sessions, workshops, and training.
– Train C&T personnel and site managers
– Provide pre-test counseling, HIV tests, post-test counseling
 Outputs: Direct products or deliverables of the program, such as intervention sessions completed, people reached, and materials distributed.
– # personnel certified
– # clients receiving pre-test counseling, HIV tests, post-test counseling

 Outcomes: Program results that occur both immediately and some time after the
activities are completed, such as changes in knowledge, attitudes, beliefs, skills,
behaviors, access, policies, and environmental conditions.
– Quality of VCT improved

– Access to VCT increased
– Clients develop & adhere to personalized risk-reduction and treatment
strategy

 Impacts: Long-term results of one or more programs over time, such as changes in
HIV infection, morbidity, and mortality
– HIV transmission rates decrease
– HIV incidence decreases
– HIV morbidity and mortality decrease
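The inputs-to-impacts chain above can be captured as a simple ordered structure. The sketch below is purely illustrative and uses abbreviated labels from the VCT example; it is not part of any formal M&E tool.

```python
# Illustrative sketch: the VCT logic model as an ordered chain of levels.
# Labels are abbreviated from the example above.
logic_model = {
    "inputs":     ["donor funds", "C&T personnel", "VCT protocols",
                   "training materials", "HIV test kits"],
    "activities": ["train C&T personnel and site managers",
                   "provide counseling and HIV tests"],
    "outputs":    ["personnel certified", "clients counseled and tested"],
    "outcomes":   ["quality of VCT improved", "access to VCT increased"],
    "impacts":    ["HIV incidence decreases",
                   "HIV morbidity and mortality decrease"],
}

# Print the model in causal order, one level per line.
for level, items in logic_model.items():
    print(f"{level.upper():<10} -> " + "; ".join(items))
```

Laying the model out this way makes the causal order explicit: each level should plausibly lead to the next.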

Example: Logic Model for TB

Portion of a model for tuberculosis control relating to increasing demand for quality services

INPUT
Human and financial resources to develop and print educational brochure

PROCESS
Distribute brochure to health facilities
Meet with physicians to promote distribution of brochure

OUTPUT
• Brochure distributed to clients of facilities

OUTCOME
• Increased customer knowledge of TB transmission and treatment
• Increased demand for quality TB services

IMPACT
• Decreased TB infection, morbidity and mortality

A Good Logic Model - characteristics

 Includes a problem statement, inputs, activities, outputs, outcomes, and impacts


 Reflects agreement among major stakeholders about intended implementation and
outcomes (planned logic model)
 Illustrates clear, sequential, and logical linkages between each part of the logic model
 Contains a problem statement that identifies underlying causes
 Includes outcomes responsive to the issues identified in the problem statement
 States outcomes as changes in knowledge, attitudes, beliefs, intentions, skills,
behaviors, access, policies, or environmental conditions
 Includes outcomes that are realistic for the stated activities

 States outcomes that are within the scope of the program’s influence

CHAPTER FIVE: EVALUATION DESIGNS
Evaluation designs aim at describing the operations of the programme, its immediate results (outputs) and its challenges. An evaluation design is a plan for the evaluation; it is linked to the evaluation questions and consists of:
– Methods for addressing them
– Data collection
– Analysis

Process Evaluation Designs


• One shot
• Cross-sectional
• Before and after without control
• Time series
• Case studies
 One-Shot Design
• Collects data from service recipients
• Determines numbers reached
• Perceptions of services
• X O1
 Cross-Sectional Design
• Collects data using a survey method
• Can be used to determine satisfaction with the intervention, or reasons for not using it, among different user groups (e.g. males vs. females)
• X O1
    O2
    O3
 Before-and-after without control design:
In such a design a single test group or area is selected and the dependent variable is
measured before the introduction of the treatment. The treatment is then introduced and
the dependent variable is measured again after the treatment has been introduced. The
effect of the treatment would be equal to the level of the phenomenon after the treatment
minus the level of the phenomenon before the treatment.

 Time Series (Trend) Design

• Establishes whether the trend changed after the intervention
• Involves tracking the measurement of variables over a period of time
• O1 O2 O3 X O4 O5 O6

 Case Study Design


• Collects data over time to understand programme
• Involves establishing which changes have occurred and why.
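For the before-and-after design described above, the estimated effect is simply the post-treatment level minus the pre-treatment level. A minimal numeric sketch, with invented measurement values:

```python
# Before-and-after without control: the treatment effect is estimated as
# the level of the dependent variable after the treatment minus its level
# before the treatment. The values here are hypothetical.
before = 42.0   # dependent variable measured before the treatment
after = 55.0    # the same variable measured after the treatment

effect = after - before
print(effect)   # 13.0
```

Note that without a control group this estimate attributes the whole change to the treatment, which is exactly the weakness of this design.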

Experimental designs can also be applied in evaluation.

Two-group simple randomized design: In a two-group simple randomized design, the population is first defined and a sample is then selected from it at random. A further requirement of this design is that the items, after being selected randomly from the population, are randomly assigned to the experimental and control groups (such random assignment of items to the two groups is technically described as the principle of randomization). Thus, this design yields two groups as representatives of the population. Since the elements constituting the sample are randomly drawn from the same population and randomly assigned to the experimental and control groups, it becomes possible to draw conclusions on the basis of the samples that are applicable to the population.
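The random selection and random assignment steps can be sketched in a few lines; the population of numbered units below is hypothetical illustration only.

```python
import random

# Sketch of a two-group simple randomized design with hypothetical unit IDs:
# randomly draw a sample from the population, then randomly assign each
# sampled unit to the experimental or the control group.
random.seed(1)                        # fixed seed so the sketch is reproducible
population = list(range(1, 101))      # 100 hypothetical population units
sample = random.sample(population, 20)  # random selection from the population

random.shuffle(sample)                # random assignment (randomization)
experimental = sample[:10]            # first half -> experimental group
control = sample[10:]                 # second half -> control group

print(len(experimental), len(control))  # 10 10
```

Because both groups come from the same random sample, differences observed between them after the intervention can be attributed to the intervention rather than to how the groups were formed.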

SAMPLING DESIGN

Sampling may be defined as the selection of some part of an aggregate or totality on the basis of which a judgement or inference about the aggregate or totality is made. In other words, it is the process of obtaining information about an entire population by examining only a part of it. In most research work and surveys, the usual approach is to make generalizations, or to draw inferences about the parameters of a population, based on samples taken from it.

Sampling design: A sample design is a definite plan for obtaining a sample from the sampling frame. It refers to the technique or procedure the researcher adopts in selecting the sampling units from which inferences about the population are drawn. The sampling design is determined before any data are collected.

Characteristics of a good sample design

 It must result in a truly representative sample.
 It must result in a small sampling error.
 It must be viable in the context of the funds available for the research study.
 It must allow systematic bias to be controlled.
 The results of the sample study should be applicable, in general, to the universe with a reasonable level of confidence.

Types of sample designs

Sample designs are basically of two types:
 Non-probability sampling
 Probability sampling

Non-probability sampling:
Non-probability sampling is a sampling procedure which does not afford any basis for estimating the probability that each item in the population has of being included in the sample. It is also known by other names, such as deliberate sampling, purposive sampling and judgement sampling. Non-probability techniques include:

1) Convenience sampling: Respondents are chosen for convenience and availability. Because each member of the target population does not have an equal chance of being selected for the sample, there is no way to know whether the results of the survey can be generalized to the target population.

2) Snowball sampling: This technique involves asking individuals who have already responded to a survey to identify additional respondents. It is useful when the members of a population are hard to reach or identify (e.g., people who participate in a particular activity, members of a particular organization). You can use this technique in conjunction with either random or convenience sampling. It also results in a sample that does not represent the entire population.

3) Quota sampling: Respondents are selected until a set quota of units with specified characteristics (e.g., age, sex) is filled.

4) Purposive sampling: Units are deliberately selected because they possess characteristics of particular interest to the study.

 Probability sampling:
Probability sampling is also known as 'random sampling' or 'chance sampling'. Under this sampling design, every item of the universe has an equal chance of inclusion in the sample. Probability techniques include:
 Systematic sampling: In some instances, the most practical way of sampling is to select every ith item on a list. Sampling of this type is known as systematic sampling. An element of randomness is introduced by using random numbers to pick the unit with which to start. For instance, if a 4 per cent sample is desired, the first item would be selected randomly from the first twenty-five, and thereafter every 25th item would automatically be included in the sample. Thus, in systematic sampling only the first unit is selected randomly and the remaining units of the sample are selected at fixed intervals. Although a systematic sample is not a random sample in the strict sense of the term, it is often considered reasonable to treat it as if it were.
 Stratified sampling: Under stratified sampling the population is divided into several sub-populations that are individually more homogeneous than the total population (the different sub-populations are called 'strata'), and we then select items from each stratum to constitute a sample. Since each stratum is more homogeneous than the total population, we are able to get more precise estimates for each stratum, and by estimating each of the component parts more accurately, we get a better estimate of the whole.
 Cluster sampling: If the total area of interest happens to be a big one, a convenient
way in which a sample can be taken is to divide the area into a number of smaller
non-overlapping areas and then to randomly select a number of these smaller areas
(usually called clusters), with the ultimate sample consisting of all (or samples of)
units in these small areas or clusters. Thus in cluster sampling the total population is
divided into a number of relatively small subdivisions which are themselves clusters
of still smaller units and then some of these clusters are randomly selected for
inclusion in the overall sample.
 Area sampling: If clusters happen to be some geographic subdivisions, in that case
cluster sampling is better known as area sampling. In other words, cluster designs,
where the primary sampling unit represents a cluster of units based on geographic
area, are distinguished as area sampling.
 Multi-stage sampling: Multi-stage sampling is a further development of the principle of cluster sampling. Suppose we want to investigate the working efficiency of nationalized banks in India and we want to take a sample of a few banks for this purpose. The first stage is to select large primary sampling units, such as states in a country. Then we may select certain districts and interview all banks in the chosen districts. This would represent a two-stage sampling design, with the ultimate sampling units being clusters of districts. If, instead of taking a census of all banks within the selected districts, we select certain towns and interview all banks in the chosen towns, this would represent a three-stage sampling design. If, instead of taking a census of all banks within the selected towns, we randomly sample banks from each selected town, then it is a case of a four-stage sampling plan. If we select randomly at all stages, we have what is known as a 'multi-stage random sampling design'.
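The systematic, stratified and multi-stage procedures described above can be sketched in a few lines of code. Everything in this sketch (frame sizes, strata, district and bank names) is hypothetical illustration, not taken from the text.

```python
import random

# Illustrative sketches of three probability sampling designs.
random.seed(7)

# 1) Systematic sampling: a 4 per cent sample implies an interval of
#    k = 100 / 4 = 25; only the first unit is chosen at random.
frame = list(range(1, 501))              # hypothetical frame of 500 units
k = 25
start = random.randrange(k)              # random start within the first 25
systematic = frame[start::k]             # every 25th item thereafter
print(len(systematic))                   # 20, i.e. 4% of 500

# 2) Proportional stratified sampling: sample each stratum separately,
#    in proportion to its share of the population.
strata = {"urban": list(range(600)),     # 60% of a population of 1,000
          "rural": list(range(400))}     # 40%
stratified = {name: random.sample(units, int(len(units) * 0.05))
              for name, units in strata.items()}
print(len(stratified["urban"]), len(stratified["rural"]))  # 30 20

# 3) Two-stage (multi-stage) sampling, mirroring the bank example:
#    stage 1 selects districts; stage 2 samples banks within them.
districts = {f"district_{i}": [f"bank_{i}_{j}" for j in range(10)]
             for i in range(8)}          # 8 districts, 10 banks each
stage1 = random.sample(sorted(districts), 3)
stage2 = [b for d in stage1 for b in random.sample(districts[d], 4)]
print(len(stage2))                       # 12 banks in the final sample
```

The common thread is that chance, not the researcher's judgement, decides which units enter the sample at every randomized stage.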

CHAPTER 4: DATA COLLECTION IN MONITORING AND EVALUATION

Table 4.1: Tools and techniques for gathering information


For each tool or technique: description, usefulness, and disadvantages.

Interviews
– Description: These can be structured, semi-structured or unstructured (see Glossary of Terms). They involve asking specific questions aimed at getting information that will enable indicators to be measured. Questions can be open-ended or closed (yes/no answers). Can be a source of qualitative and quantitative information.
– Usefulness: Can be used with almost anyone who has some involvement with the project. Can be done in person, on the telephone or even by email. Very flexible.
– Disadvantages: Requires some skill in the interviewer (for more on interviewing skills, see later in this toolkit).

Key informant interviews
– Description: Interviews carried out with specialists in a topic, or with someone who may be able to shed a particular light on the process.
– Usefulness: As key informants often have little to do with the project or organisation, they can be quite objective and offer useful insights. They can provide something of the "big picture", where people more involved may focus at the micro (small) level.
– Disadvantages: Needs a skilled interviewer with a good understanding of the topic. Be careful not to turn something into an absolute truth (which cannot be challenged) just because it has been said by a key informant.

Questionnaires
– Description: Written questions used to get written responses which, when analysed, will enable indicators to be measured.
– Usefulness: Can save a lot of time if self-completing, enabling you to reach many people. Done in this way, it gives people a feeling of anonymity and they may say things they would not say to an interviewer.
– Disadvantages: With people who do not read and write, someone has to go through the questionnaire with them, which means no time is saved and the numbers one can reach are limited. With questionnaires, it is not possible to explore further what people are saying. Questionnaires are also over-used and people get tired of completing them. They must be piloted to ensure that questions can be understood and cannot be misunderstood. If the questionnaire is complex and will need computerized analysis, you need expert help in designing it.

Focus group discussions
– Description: In a focus group, about six to twelve people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
– Usefulness: Can be a useful way of getting opinions from quite a large sample of people.
– Disadvantages: It is difficult to do random sampling for focus groups, so findings may not be generalizable. Sometimes people influence one another, either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed; this requires special equipment and can be very time-consuming.

Community meetings
– Description: A gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input, to help in measuring indicators.
– Usefulness: Useful for getting a broad response from many people on specific issues. Also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. Useful at critical points in community projects.
– Disadvantages: Difficult to facilitate and requires a very experienced facilitator. May require breaking into small groups followed by plenary sessions when everyone comes together again.

Field worker reports
– Description: Structured report forms that ensure indicator-related questions are asked, answers recorded, and observations recorded on every visit.
– Usefulness: Flexible, an extension of normal work, so cheap and not time-consuming.
– Disadvantages: Relies on field workers being disciplined and insightful.

Ranking
– Description: Getting people to say what they think is most useful, most important, least useful, etc.
– Usefulness: Can be used with individuals and groups, as part of an interview schedule or questionnaire, or as a separate session. Where people cannot read and write, pictures can be used.
– Disadvantages: Ranking is quite a difficult concept to get across and requires very careful explanation, as well as testing to ensure that people understand what you are asking. If they misunderstand, your data can be completely distorted.

Visual/audio stimuli
– Description: Pictures, movies, tapes, stories, role plays and photographs used to illustrate problems, issues, past events or even future events.
– Usefulness: Very useful together with other tools, particularly with people who cannot read or write.
– Disadvantages: You have to have appropriate stimuli, and the facilitator needs to be skilled in using them.

Rating scales
– Description: Makes use of a continuum along which people are expected to place their own feelings, observations, etc. People are usually asked to say whether they agree strongly, agree, don't know, disagree, or disagree strongly with a statement. Pictures and symbols can be used if people cannot read and write.
– Usefulness: Useful for measuring attitudes, opinions and perceptions.
– Disadvantages: You need to test the statements very carefully to make sure there is no possibility of misunderstanding. A common problem is when two concepts are included in one statement and you cannot be sure whether an opinion is being given on one, the other, or both.

Critical event/incident analysis
– Description: A way of focusing interviews with individuals or groups on particular events or incidents, in order to get a very full picture of what actually happened.
– Usefulness: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team get a picture reasonably close to what actually happened and diagnose what went wrong.
– Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of "he said/she said". It can be difficult not to take sides and to remain objective.

Participant observation
– Description: Direct observation of events, processes, relationships and behaviours. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance.
– Usefulness: Can be a useful way of confirming, or otherwise, information provided in other ways.
– Disadvantages: It is difficult to observe and participate at the same time. The process is very time-consuming.

Self-drawings
– Description: Getting participants to draw pictures, usually of how they feel or think about something.
– Usefulness: Can be very useful, particularly with younger children.
– Disadvantages: Can be difficult to explain and interpret.

Example of monitoring mechanism


 An electronic filing system to organize all communications, reports, minutes of
meetings, and any other existing documents that can help you keep track of your
project activities.
 Document logs, including activity logs, to track the events and progress of the project
activities, and contact logs to record the time and details of contacts.
 Tracking software for project documents or recording website and other technology-related project activities.
 Sources of Data
 Documents
 Recipients of services
 Programme staff
 Provision of Service

Review of documents as a secondary data source


– Programme document
– Work Plans
– Progress reports
– Project files
– Financial records
– Procurement records
– Minutes of meetings

– Service statistics
Planned inputs and activities outlined in the work plan are compared with monitoring
information:
– e.g. compare the number of planned trainings with those actually undertaken
– Minutes of meetings: looks at decisions on the programme
– Financial records: compares the budget with expenditures
– Discussion with management: gathers qualitative data about the overview of the
programme design and implementation; clarifies issues that are not clear from the
documents
– Interview with staff: collects various types of information to establish attitudes and
opinions on the programme and workload, and to gain insight into the workings of
the programme
– Observation of service: when done using a specified set of criteria, directly
observing the provision of services gives an indication of the quality of those
services
– Survey of beneficiaries: collects data to determine the types and perceived quality
of services received

45
PHT 422: Health Programme monitoring and Evaluation David Masinde
CHAPTER SIX: DATA CAPTURE AND MANAGEMENT

Data capture is the process of converting data from paper form into a format that can be interpreted easily.
Data capture steps:
 Receipt of forms
 Editing
 Querying
 Imputation
 Coding
 Conversion
 Verification
 Validation

 Receipt of forms
The evaluator should put a strict reporting schedule in place. The schedule is used to check on:
 Promptness
 Completeness
 Assign an ID number to each form

 Editing
Procedure that ensures completeness and accuracy, to detect missing, inconsistent, inappropriate or obscure items.


 Querying
Procedure for seeking clarification on missing, inconsistent, inappropriate or obscure items.

 Imputation
Procedure of assigning the most probable value to an item whose exact value is unknown.
It’s only used as a last resort.
 Coding
Translation of an item into a numerical value. Various types of items:
 Items with numeric values
 Pre-coded items, including “Unknown” or “not stated”
 Open-ended items

Item                                Code
Sex                                 1=male; 2=female
Age                                 Actual age in years
Time spent in clinic (in minutes)   1=0-9; 2=10-19; 3=20-29; 4=30+
Quality of service                  1=poor to 10=excellent
Community participation:
  Material contribution             1=mentioned; 0=not mentioned
  Involved in management            1=mentioned; 0=not mentioned
  Nil                               1=mentioned; 0=not mentioned
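The coding scheme above can be sketched in a few lines of Python. This is an illustrative sketch only; the function and variable names are invented for this example.

```python
# Illustrative coding scheme: translate questionnaire items into numeric codes.
SEX_CODES = {"male": 1, "female": 2}

def code_time_in_clinic(minutes):
    """Code time spent in clinic into the class intervals 0-9, 10-19, 20-29, 30+."""
    if minutes >= 30:
        return 4
    return minutes // 10 + 1  # 0-9 -> 1, 10-19 -> 2, 20-29 -> 3

def code_participation(mentioned_items):
    """Multiple-response item: 1 = mentioned, 0 = not mentioned."""
    options = ["material contribution", "involved in management", "nil"]
    return {opt: int(opt in mentioned_items) for opt in options}

record = {
    "sex": SEX_CODES["female"],                  # coded as 2
    "age": 34,                                   # actual age in years
    "time_in_clinic": code_time_in_clinic(25),   # 20-29 minutes, coded as 3
    "participation": code_participation({"material contribution"}),
}
```

Multiple-response items such as community participation are coded as a set of 1/0 indicators, one per option, exactly as in the table above.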

 Conversion
Converting data from paper-based forms onto electronic media
 Manual – keyboard data entry. Potential source of errors

 Automated – specific hardware and software. Can also be used for some editing,
coding and imputation
Data entry screen
An electronic replica of the form/questionnaire, designed using software packages such as Epi Info, EpiData or SPSS. It should:

 Specify fields – numeric or alphanumeric
 Specify no. of characters for each variable
 Define each variable by a specific “variable name” - DICTIONARY
 Assign value labels to each variable
Controls
Create controls to conform to data requirements. Examples:
 Reject duplication
 Reject codes outside those specified
 Obey skip patterns
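The controls listed above can be mimicked in code. The sketch below uses invented field names and codes (a "pregnant" question that applies only to female respondents illustrates a skip pattern):

```python
# Minimal entry controls: duplicate check, code range check, skip pattern.
VALID_CODES = {"sex": {1, 2}, "pregnant": {1, 2}}  # 1=male/yes, 2=female/no

def check_entry(record, entered_ids):
    """Return a list of control violations for one entered record."""
    errors = []
    if record["id"] in entered_ids:                     # reject duplication
        errors.append("duplicate ID")
    for field, allowed in VALID_CODES.items():
        if field in record and record[field] not in allowed:
            errors.append(f"{field}: code outside those specified")
    # skip pattern: the pregnancy question applies only when sex is female (2)
    if record.get("sex") == 1 and "pregnant" in record:
        errors.append("skip pattern violated: 'pregnant' asked of a male")
    return errors

clean = check_entry({"id": 7, "sex": 2, "pregnant": 1}, entered_ids={1, 2})
flagged = check_entry({"id": 2, "sex": 2}, entered_ids={1, 2})  # duplicate ID
```

In practice, packages such as Epi Info build these checks into the data entry screen itself, rejecting the keystroke rather than flagging it afterwards.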
Data entry
Keying in data is usually done manually. It is time-consuming and a potential source of errors,
hence the necessity for verification.
 Verification
Manual conversion. Forms are independently keyed in and results compared to original set. If
discrepancy exceeds a pre-set limit, the whole set of forms to be keyed in afresh. Sample or
100% verification depends on the level of error revealed
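Double keying and comparison can be sketched as below. The records and the 1% tolerance are invented example figures; the pre-set limit in practice is whatever the evaluation team specifies.

```python
# Compare two independently keyed data sets field by field.
def discrepancy_rate(first_keying, second_keying):
    """Fraction of fields that differ between the two keyings."""
    fields = 0
    mismatches = 0
    for rec1, rec2 in zip(first_keying, second_keying):
        for field in rec1:
            fields += 1
            if rec1[field] != rec2[field]:
                mismatches += 1
    return mismatches / fields

batch_a = [{"id": 1, "age": 34}, {"id": 2, "age": 27}]
batch_b = [{"id": 1, "age": 34}, {"id": 2, "age": 72}]  # 27 keyed as 72

rate = discrepancy_rate(batch_a, batch_b)
if rate > 0.01:  # pre-set limit, e.g. 1% of fields
    print("Re-key the whole set of forms")
```

Here one of four fields disagrees (a transposition error), so the rate of 0.25 exceeds the limit and the batch would be re-keyed.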
 Validation
Process of reviewing data for range and illogical errors
 Number of responses for each variable to be consistent – otherwise countercheck with
original forms by use of ID numbers
 Where inconsistent, “clean” data (correct, impute, or delete)

DATA MANAGEMENT
Involves manipulating data in terms of:
 Sorting
 Indexing
 Creating subsets
 Grouping
 Creating new variables
 Merging files
The main goal of data management is to develop a comprehensive database that enables users
to readily access information
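The manipulations listed above can each be expressed in a line or two of code. A sketch with invented client records (real M&E work would typically do this in a statistical package such as SPSS):

```python
# Sorting, subsetting, grouping, new variables and merging on a small client file.
clients = [
    {"id": 1, "age": 34, "site": "A"},
    {"id": 2, "age": 17, "site": "B"},
    {"id": 3, "age": 52, "site": "A"},
]
visits = {1: 4, 2: 1, 3: 2}  # visits per client id, held in a second file

clients.sort(key=lambda c: c["age"])                # sorting
adults = [c for c in clients if c["age"] >= 18]     # creating a subset
by_site = {}
for c in clients:                                   # grouping by site
    by_site.setdefault(c["site"], []).append(c["id"])
for c in clients:
    c["adult"] = c["age"] >= 18                     # creating a new variable
    c["visits"] = visits[c["id"]]                   # merging files on the id
```

The id field acts as the key that links the two files, which is why assigning an ID number to each form at receipt matters.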
Data management Components

 Storage
 Create & save a file for the data
 Develop and maintain a clear catalogue of filing
 Prior to storage, scan for viruses
 Maintain backup copies of all data files
 Create working files & keep the original data set intact
 Data security – ensuring data are kept safe from:
 corruption
 theft
 misplacement
 destruction
 unauthorized access
 Updating/maintenance

Occasioned by changes in the software or tools used. Maintain both copies (before and after
changes) with appropriate version numbers

 Retrieval
Ensure easy retrieval. Incorporate appropriate restrictions to access data:
 Password
 Read-only format
Data Analysis and Management

Aspect        Database   Spreadsheet   Statistical
Ease of use   **         ***           **
Data entry    ***        **            *
Analysis      *          **            ***
Graphics      *          ***           **
Management    ***        *             ***

Key: * Poor   ** Good   *** V. Good

CHAPTER SEVEN: DATA SOURCES & QUALITY ASSURANCE
Definitions
 Data - raw facts that are collected and form the basis for what we know
 Information - the product of transforming the data by adding order, context, and
purpose
 Knowledge - the product of adding meaning to information by making connections
and comparisons and by exploring causes and consequences
 Sources of data
M&E systems incorporate data from more than one level of data collection (clients,
providers, facilities, population). Two categories of sources:
 Routine: Data collected on a continuous basis Examples:
◦ Facility-based data (service statistics)
◦ Community based data (service statistics)
◦ Vital registration
◦ Sentinel reporting (surveillance)

 Non-routine: Data collected on a periodic basis. Examples:


◦ Household or facility-based surveys
◦ Population census
◦ Special research
Data quality
Quality does NOT mean zero defects. “Quality” is relative – based on what is acceptable
or fit for the purpose, rather than a concept of absolute perfection. Data have quality if they
satisfy the requirements of their intended use: all the features and characteristics of data that
bear on their ability to meet the stated needs and expectations of the user.

Data quality dimensions


Quality data have 6 key attributes:
 Accuracy or validity
 Reliability
 Timeliness
 Completeness
 Precision
 Credibility or integrity
 Accuracy or validity
Data are correct and explicitly reflect the object or transaction it describes
 Reliability
Data are measured and collected consistently – based on protocols and procedures that do
not change according to who is using them, when or how often they are used
 Timeliness
Data are up-to-date (current) and available on time (available to the end-user when needed)
 Completeness
All the expected attributes are appropriately inclusive. Data represents the complete list of
all eligible persons or units
 Precision
Operationally defined in clear, understandable terms - have the expected sufficient detail
 Credibility or integrity
The degree to which users trust both the accuracy and reliability of data. Data should be
protected from deliberate bias or manipulation (political or personal reasons).

What influences data quality?


 Poor data quality is more of a behavioural than a technological problem
 Lack of clear mechanisms for data ownership and accountability for data
quality
 Inability to address data problems due to lack of time, competing priorities,
inability to access correct information, lack of authority, or unavailable
administrative support
 Motivational issues such as the desire to focus on more challenging work or a
reluctance to expose data problems created by co-workers
 Organizational influences
 Available resources
 Structure of the programme
 Roles and responsibilities
 Organizational culture
 Behavioural influences
 Motivation
 Attitudes and values
 Confidence
 Sense of responsibility
 Technical influences
 Standard indicators
 Data collection tools
 Type of IT in use
 Adequacy and quality of staff
Types of errors in M&E
 Transposition
 When 39 is entered as 93
 Copying errors
 When 1 is entered as 7 or the number 0 is entered as the letter O
 Coding errors – entering the wrong code
 An interviewer circled 1 = Yes, but the coder copied 2 (which = No) during
coding
 Routing errors
 When a person filling out a form places the number in the wrong part or
wrong order
 Consistency errors
 When two or more responses on the same tool are contradictory - e.g. birth
date & age are inconsistent
 Range errors
 When a number lies outside the range of possible values
 Double counting errors
 Within partner double counting of individuals
 Between partner double counting of individuals
 Double counting of sites
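Several of these error types (range, consistency, double counting) can be caught automatically. A sketch with invented field names and an invented reference date; the 93 in the example is the transposition of 39 described above:

```python
from datetime import date

def find_errors(record, seen_ids, today=date(2024, 1, 1)):
    """Flag range, consistency and double-counting errors in one record."""
    errors = []
    if not (0 <= record["age"] <= 120):                  # range error
        errors.append("range: age outside possible values")
    years_since_birth = today.year - record["birth_date"].year
    if abs(years_since_birth - record["age"]) > 1:       # consistency error
        errors.append("consistency: birth date and age disagree")
    if record["id"] in seen_ids:                         # double counting
        errors.append("double counting: individual already reported")
    return errors

rec = {"id": 5, "age": 93, "birth_date": date(1985, 6, 1)}  # 39 transposed as 93
print(find_errors(rec, seen_ids={1, 2}))
```

Note that the transposed age 93 passes the range check but fails the consistency check against the birth date, which is why tools should apply both.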

Challenges in ensuring data quality

 It requires cross-functional cooperation
 No specific unit or department feels it is responsible for the problem
 It requires the agency to acknowledge that poor data quality is a problem
 It requires discipline across the board
 It requires an investment of financial and human resources
 It is perceived to be extremely manpower-intensive
 The return on investment is often difficult to quantify and hence justify

Functional Areas of an M&E System that Affect Data Quality


SYSTEMS ASSESSMENT QUESTIONS BY FUNCTIONAL AREA

I. M&E Capabilities, Roles and Responsibilities
1. Are key M&E and data-management staff identified with clearly assigned responsibilities?

II. Training
2. Have the majority of key M&E and data-management staff received the required training?

III. Data Reporting Requirements
3. Has the Program/Project clearly documented (in writing) what is reported to whom, and how and when reporting is required?

IV. Indicator Definitions
4. Are there operational indicator definitions meeting relevant standards, and are they systematically followed by all service points?

V. Data-collection and Reporting Forms and Tools
5. Are there standard data-collection and reporting forms that are systematically used?
6. Are source documents kept and made available in accordance with a written policy?

VI. Data Management Processes
7. Does clear documentation of collection, aggregation and manipulation steps exist?

VII. Data Quality Mechanisms and Controls
8. Are data quality challenges identified and are mechanisms in place for addressing them?
9. Are there clearly defined and followed procedures to identify and reconcile discrepancies in reports?
10. Are there clearly defined and followed procedures to periodically verify source data?

VIII. Links with National Reporting System
11. Does the data collection and reporting system of the Program/Project link to the National Reporting System?

Addressing Data Quality Issues:


Requires a comprehensive data quality program built on four principles:
 Prevention
 Detection
 Correction
 Accountability
 Prevention
 Users need to understand the importance of data quality and take the appropriate
actions to ensure that data are entered correctly, completely and in a timely manner
 Detection
 Involves the passive and active monitoring of data to identify existing errors –
based on clear guidelines
 Passive monitoring occurs every time a user accesses a record - should review
relevant data for errors and ensure that all erroneous items are identified
 Active data monitoring occurs when the organization mandates periodic &
systematic data quality checks
 Correction
 Correcting data problems may be quick and painless or in extreme cases may
be a very lengthy, arduous process. Once an error has been detected, there are
four viable actions:
o Reject the error : When accuracy is more important than completeness or if
the problem with the data is deemed to be severe
o Accept the error: If the error is within tolerance limits set
o Correct the error: When the correct piece of data can be determined
o Apply a default value for the erroneous data: When completeness is very
important, then a default value is substituted for the erroneous data

Illustration – Systems Findings at the M&E Unit (HIV/AIDS)

Reporting level: National M&E Unit
– Finding: No specific documentation specifying data-management roles and responsibilities, reporting timelines, standard forms, storage policy, …
  Recommendation: Develop a data management manual to be distributed to all reporting levels
– Finding: Inability to verify reported numbers by the M&E Unit because too many reports (from Service Points) are missing (67%)
  Recommendations: Systematically file all reports from Service Points; develop guidelines on how to address missing or incomplete reports
– Finding: Most reports received by the M&E Unit are not signed off by any staff or manager from the Service Point
  Recommendation: Reinforce the need for documented review of submitted data – for example, by not accepting un-reviewed reports

Reporting level: Intermediate Aggregation Level
– Finding: Inability to retrieve source documents (i.e., treatment forms) for a specific period
  Recommendation: Improve the source document storage process by clearly identifying stored source documents by date

Reporting level: Service Points
– Finding: Confusion regarding the definition of a patient “lost to follow-up” (3 months for Temeke Hospital; 2 months for Iringa Hospital)
  Recommendation: The M&E Unit should clearly communicate to all service points the definition of a patient “lost to follow-up”
– Finding: The service points do not systematically remove patients “lost to follow-up” from counts of numbers of people on ART
  Recommendation: Develop a mechanism to ensure that patients “lost to follow-up” are systematically removed from the counts of numbers of people on ART
– Finding: In cases of “satellite sites”, the reporting system and source documents do not always identify the location of a patient
  Recommendation: Develop a coding system that clearly identifies a patient’s treatment location so that data verification can be accomplished

Key Success Factors for Data Quality


1. Functioning information systems
2. Clear definition of indicators consistently used at all levels
3. Description of roles and responsibilities at all levels
4. Specific reporting timelines
5. Standard/compatible data-collection and reporting forms/tools with clear instructions
6. Documented data review procedures to be performed at all levels
7. Steps for addressing data quality challenges (missing data, double-counting, lost to
follow up)
8. Storage policy and filing practices that allow retrieval of documents for auditing
purposes (leaving an audit trail)

CHAPTER EIGHT: DATA ANALYSIS AND INTERPRETATION
Data Analysis – Definition
Further mathematical calculation to produce statistics about tabulated data; the manipulation,
summarisation and interpretation of data. It involves converting data into intelligible
information such as averages, frequency tables, sums or other statistics.
Role of Data Analysis in M&E
 Baseline surveys
 Reveal participants’ characteristics in terms of age, sex, residence, educational level,
marital status, etc
 Indicate frequency of specific behaviors, risks and protective factors
 Monitoring and process evaluation
 Reveal quality of program
 Coverage and exposure
 Program functions
 Outcome and impact evaluation
 Reveal if and how program achieved its intended results
 Reveal what portion of changes in outcome your program can take credit for
Data analysis enables the following comparisons:
 Actual results vs. program targets
 Actual progress vs. the projected time frame
 Results across program sites
 Program outcomes vs. control or comparison group outcomes

Data Analysis – Types


 Exploratory – Used when a programme is new & unclear what to expect from the
data
 Descriptive – Summarises findings and describes sample (most common)
 Inferential – Enables one to draw conclusions about the larger population from
which the sample was drawn

Measures of Central Tendency


Mean
 Known as average or arithmetic mean
 Derived by dividing sum of all observations with total sample size
 Example: The following figures represent earnings of five employees of an
organization: 2500, 3500, 5000, 6500, 7500
 Mean income = (2500+3500+5000+6500+7500)/5 = 25000/5 = 5000
 Used for comparison with different sub-groups/sites
Mode
 Most frequently occurring measure/response in a set of data, e.g. given the following
age distribution: 45, 53, 34, 27, 25, 34, 45, 49, 52
 Distribution is bimodal.
 Has two modes: 45, 34
 Mode is useful when dealing with categorical data
 It’s not meaningful for continuous data

Median

 Measurement with exactly half of measurements below and half above it
 Median=(n+1)/2-th observation
 E.g. given the scores: 2,3,4,5,7,7,9,10,12
 Median=(9+1)/2= 5-th observation which is 7
 The median is good for skewed distributions since it’s resistant to change
 It’s not affected by outliers
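The three measures can be checked with Python's standard statistics module, using the figures from the examples above:

```python
import statistics

# Mean: earnings of five employees
earnings = [2500, 3500, 5000, 6500, 7500]
mean_income = statistics.mean(earnings)     # 25000 / 5 = 5000

# Mode: the age distribution is bimodal (45 and 34 each occur twice)
ages = [45, 53, 34, 27, 25, 34, 45, 49, 52]
modes = statistics.multimode(ages)

# Median: the (9+1)/2 = 5th ordered observation
scores = [2, 3, 4, 5, 7, 7, 9, 10, 12]
median_score = statistics.median(scores)
```

statistics.multimode (Python 3.8+) returns all modes, which matters here because a single-mode function would silently report only one of the two.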

Definition of variables
A variable is any characteristic or attribute of an element of a population that can be
measured in some form such as age, height, weight, religion, marital status, etc. An element
is an individual unit being examined capable of being measured in some form e.g. students,
women, youth, adolescents, men, children etc.
 A dependent variable is the phenomenon (happening) of interest that we are trying to
explain, e.g. infant mortality (dead or alive), age at first marriage, use of contraception
(modern vs. other methods) etc.
 An independent variable is one that explains the variation (change) in the dependent
variable e.g. sex, residence, marital status, etc.

Levels of Measurement
Variables can be measured in three levels:
-nominal scale
-ordinal scale
-interval scale
 Nominal variables don’t have a natural ordering, e.g. sex (male, female); marital
status (single, married, divorced/separated)
 Ordinal variables have ordered scales or natural ordering e.g. social class (low,
middle, high); education level (none, primary, secondary, college)
 Interval variables have numerical distances between any two levels of the scale e.g.
age, income, weight, etc

Data Analysis Steps


 Start with a plan (during design stage)
 Review plan during data collection
 Main focus after data collection
 Completed only after report writing

Data Analysis – Plan


 It is guided by:
– Questions to be answered
– Data collection methods
– Kind of data collected
– Ways in which data will be used
 Review each question on the form in consultation with potential data users
 Indicate the type of analysis for each question – count, frequency, percentage,
average, change between pre- and post-test, or content analysis
 Subset the data

Data Analysis – Plan

Do you want to report …                               Analysis
- The no. of people who answered?                     A count
- How many people answered ‘a’, ‘b’ or ‘c’?           A frequency
- % who answered ‘a’, ‘b’, or ‘c’?                    A % distribution
- An average score?                                   A mean
- A change in score from a pre-test to a post-test?   A change in score
- On open-ended questions?                            Content analysis

Data Analysis – Tabulation


 For each question, select an appropriate unit of analysis (e.g. person, facility, district)
 Prepare dummy tables
 Conduct the type of analysis selected and tabulate results for each question

Governing principles:
 Prepare tabulations even if coverage is incomplete
 Present simple definitions if they differ from the standard
 Prepare tabulations regularly and timely
Data Analysis – Results
 Each table to be accompanied by an explanatory text
 Include annotation in case of limitations on data quality
 Where appropriate use figures, maps and graphs
Data Analysis in M&E
 Much of analysis done in typical M&E is straightforward
 Two types of statistics are used: descriptive and inferential statistics

 Descriptive Statistics
 Describe general characteristics of a set of data
 Examples: frequencies, counts, averages and percentages
 A frequency states a single number of observations or occurrences

Table 8.1: The sex composition of the population of Kenya, 1989

Sex      Frequency
Male     10,628,368
Female   10,815,268
Total    21,443,636

 Frequency Tables
The table shows that out of the total population of Kenya, males were 10,628,368 while
females were 10,815,268. This statement gives a frequency distribution.

Ratios


 Obtained by dividing class frequencies of a given frequency distribution
 Sex ratio is obtained by dividing frequencies in male categories with those of females
 E.g. Sex ratio=M/F*100
 Others: child woman ratio, dependency ratio
Proportions
 Give relative sizes of classes of a given frequency distribution
 For each class divide class frequency with total frequencies
 E.g. women 15-19, 1905 out of a population of 5564: 1905/5564=0.34
 Percentages are obtained by multiplying proportions by 100

Table 8.2: Relative age distribution of women – KCPS

Age group   Women (f)   Proportion   Percent
15-19       1705
20-24       1290
25-29       1040
30-34       836
35-39       691
40-45       559
45-49       461
Total       6581
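The ratio, proportion and percentage computations described above can be reproduced directly from the table figures (the percent column of Table 8.2 is left as an exercise; this sketch computes it):

```python
# Sex ratio from Table 8.1: males per 100 females.
males, females = 10_628_368, 10_815_268
sex_ratio = males / females * 100

# Proportions and percentages from the Table 8.2 class frequencies.
women = {"15-19": 1705, "20-24": 1290, "25-29": 1040,
         "30-34": 836, "35-39": 691, "40-45": 559, "45-49": 461}
total = sum(women.values())
proportions = {g: f / total for g, f in women.items()}          # f / total
percentages = {g: round(p * 100, 1) for g, p in proportions.items()}
```

The proportions necessarily sum to 1 (and the percentages to 100), which is a quick self-check when filling in such a table by hand.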

Inferential data analysis and interpretation


Cross-Tabulations
 Main objective of any research undertaking is to explain phenomena of interest
 Cross-classifications are used to determine whether there is any association between
two or more variables (attributes in a data set)
 Researcher has to specify dependent and independent variables
 An independent variable is one that explains the variation (change) in the dependent
variable
 Chi-square test is used to test for statistical significance
 Test only tells us of the existence of an association but doesn’t explain direction of
effect
 Allow evaluator to make inferences about population from which sample data were
drawn
 Grounded in concept of probability or likelihood of an event occurring
 Rely on statistical significance
 Once appropriate method of analysis is determined ,one can begin to consider how
analysis will inform your program at each stage i.e. design, process and
outcome/impact
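The chi-square statistic behind a cross-tabulation can be computed in a few lines. The 2x2 counts below are invented for illustration (residence by contraceptive use); the critical value of 3.84 is the standard 5% threshold for 1 degree of freedom:

```python
# Chi-square statistic for a 2x2 cross-tabulation (invented counts):
# rows = residence (urban, rural), columns = contraceptive use (yes, no).
observed = [[60, 40],
            [30, 70]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

significant = chi_square > 3.84  # 5% critical value, 1 degree of freedom
```

As the notes caution, a significant statistic only establishes that an association exists; it says nothing about the direction of the effect.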
 Two inferential methods are considered here: logistic regression and linear regression
Logistic Regression
 Model used when the dependent variable is dichotomous (binary), i.e. takes only
two values
 Such data is generated by yes/no responses
 It’s flexible and easy to interpret
 The odds generated permit direct observation of relative importance of each
independent variable in predicting the dependent variable
 Odds ratios are used to make statistical inferences about the population from the sample
Linear Regression
 An equation involving only two variables
 An equation involving more than two variables is known as multiple regression
 Used when dependent variable is continuous such as age at first marriage, first sex,
first birth, etc
 Independent variables can be continuous or categorical
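A two-variable least-squares fit can be computed directly from the defining formulas. The data points below are invented for illustration (years of schooling against age at first marriage) and happen to lie exactly on a line:

```python
# Simple (two-variable) linear regression by least squares.
x = [0, 4, 8, 12, 16]     # independent variable, e.g. years of schooling
y = [17, 19, 21, 23, 25]  # dependent variable, e.g. age at first marriage

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

def predict(xi):
    return intercept + slope * xi
```

An equation involving more than one independent variable (multiple regression) generalises the same idea, but is normally left to a statistical package.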

Measuring program performance


Commonly used measures
 Coverage
 Utilization
 Trends

– Coverage
– Utilization
Other measures
 Quality
 Efficiency
 Coverage
 Extent to which a programme reaches its intended target population, institution or
geographic area
 Achieved when people in need of services enter the programme and receive the full
set of services which they need
 Increasing coverage involves:
o bringing new clients into the programme; and
o retaining clients as long as needed to provide the intervention
 Coverage would be computed as the program-level indicator (numerator) divided by
the population in need (denominator)
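As a sketch of that computation (the figures are invented example values, not programme data):

```python
# Coverage = programme-level indicator / population in need, as a percentage.
children_immunized = 4_500   # numerator: clients reached by the programme
children_under_one = 6_000   # denominator: estimated population in need

coverage = children_immunized / children_under_one * 100
print(f"Immunization coverage: {coverage:.1f}%")
```

The denominator usually comes from census projections rather than programme records, which is why routine and non-routine data sources both feed coverage indicators.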
 Utilization
 Level of service/commodity uptake by clients
 Level of utilization of programme services is a proxy measure of the investments by
the programme
 Guided by number of clients served and respective targets already set by the
programme
 Trends
 Monitoring programme performance using multiple points in time
o How has the coverage been changing over time?
o How has the utilization been changing over time?
 Useful concerning critical decisions of scale-up, replication or winding-up.
 Quality
 Focal point for facilities & programs
 Influences clients’ willingness to:
– seek for services
– continue receiving services
 Efficiency
 A measure of how economically or optimally inputs are used to produce outputs
 Program can be effective yet inefficient e.g. program may reach the targeted number
of expectant mothers but use more workers
 Need to balance between efficiency and effectiveness

CHAPTER NINE: MONITORING AND EVALUATION INFORMATION USE
Once reports have been received, they will need to be combed to give meaning to the
organization's effort. In designing an M&E system, we must define how:
1) Reported findings will be utilized.
2) Additional benefits from the findings, such as feedback, knowledge and learning, will be
derived from the reports.
Information obtained from the reports shall be shared within and outside the
organization.

Plan for communication as an integral part of M&E system


Charitable and nonprofit organizations should communicate the results of their project
evaluations to their external and internal stakeholders. Communicating evaluation results
internally is also crucial as it can assist your organization in making decisions on the future
of the current project, attract funding for other projects, and facilitate strategic planning.
Without communicating the results, evaluation is a waste of your organization's resources.

How to use evaluation results


You can use the results of a project evaluation to
1) Identify ways to improve or shift your project activities;
2) Facilitate changes in the project plan;
3) Prepare project reports (e.g., mid-term reports, final reports);
4) Inform internal and external stakeholders about the project;
5) Plan for the sustainability of the project;
6) Learn more about the environment in which the project is being or has been carried
out;
7) Learn more about the target population of the project;
8) Present the worth and value of the project to stakeholders and the public;
9) Plan for other projects
10) Compare projects to plan for their futures;
11) Make evidence-based organizational decisions;
12) Demonstrate your organization's ability in performing evaluations when searching for
funds; and demonstrate your organization's commitment to being accountable for
implementing its plans, pursuing its goals, and measuring its outcomes.

Communicating Evaluation results Preparing Reports


Producing a report is one way to communicate the results with your stakeholders such as
project funders, decision makers, planners, project managers, or those who act or modify
their actions based on the evaluation results. The report should include those aspects of the
project and its evaluation that are, based on your knowledge, important to the readers. The
report should also encourage them to use the information and recommendations.

Guidelines for preparing evaluation report


The following are some tips to prepare an evaluation report for various audiences.
How to prepare an evaluation report

1) Identify the potential readers of the report. They may be some of the stakeholders you
identified in your evaluation plan.
2) Choose clear understandable language geared toward the report's primary audience.

3) Based on this audience, prioritize the evaluation questions that you want to answer in
this report.
4) Structure the materials in a way that leads readers from the key findings to the details
of the project.
5) Gather enough information to explain details such as budgeting and planning. If you
know that your readers may want to duplicate your evaluation, you need to add such
details to the report. These details should be mainly descriptive.
6) If preparing a formative or process evaluation report, provide enough details about
the operational aspects of the project (i.e., what was done and how).
7) If preparing a summative or outcome evaluation report, provide enough details about
how you interpreted the results and drew conclusions.
8) Use graphs, charts, tables, diagrams, and other visual techniques to display the key
results as simply as possible.
9) Always prepare an executive summary (see below) if the report is longer than 15
pages.

Key element of an evaluation report


An evaluation report may include the following elements:
1) Executive Summary:
Include a short summary of the evaluation process and a complete summary of
results, objectives achieved, lessons learned, questions answered, and needs fulfilled.
It should be not more than two pages long.
2) Introduction:
Present the background and activities of the project and the purpose of the evaluation.
3) Evaluation Methods and Tools:
Explain the evaluation plan, approach, and tools used to gather information. Provide
some supporting materials such as a copy of the evaluation plan and the tools that
were developed.

4) Summary of Results:
Present the results of the qualitative and quantitative data analysis.
5) Interpretation of Results:
Explain the interpretation of results including impacts on participants and staff,
effectiveness of services, sustainability of project activities, strengths and weaknesses
of the project, and lessons learned (see Module Three).
6) Connection to the Project Objectives:
Highlight the value and achievements of the project and the needs or gaps that the
project addressed.
7) Conclusions:
Describe, overall, (a) how your project objectives were met, (b) how the purpose of
the evaluation was accomplished, and (c) how the project evaluation was completed.
8) Recommendations:
Summarize the key points, make suggestions for the project's future and create an
action plan for moving forward. You may present recommendations in the following
parts:
 Refer to the usefulness of the results for your organization, or for others, in areas such
as decision-making, planning, and project management.

 Refer to the project limitations, the assistance required, and resources that can
make future project evaluations more credible and efficient.
 Describe the changes you would make to your project if you were to carry it
out again and the suggestions you have for other organizations that may want
to conduct a similar project.

Presenting Results in Person


Presenting evaluation results to some project stakeholders in a face-to-face, two-way format
gives your audience an opportunity to ask questions. It also provides you with an opportunity
to communicate directly with your audience and receive direct feedback, not only on the
project evaluation and its report but also on the other needs, expectations, and concerns that
they may have.

Using the Media to Communicate Results


Using the media is another way to communicate all or part of the results to external
stakeholders. By getting your results published, you can increase the visibility of your
organization and contribute positively to the way its work is perceived by the public. Target
the audience who may be most interested in, and find potential benefits from, the results.
Taking action
Monitoring and evaluation have little value if the organisation or project does not act on the
information that comes out of the analysis of data collected. Once you have the findings,
conclusions and recommendations from your monitoring and evaluation process, you need
to:
1) Report to your stakeholders;
2) Learn from the overall process;
3) Make effective decisions about how to move forward; and, if necessary,
4) Deal with resistance to the necessary changes within the organisation or project, or
even among other stakeholders.

Reporting the Evaluation Information


Whether you are monitoring or evaluating, at some point, or points, there will be a reporting
process. This reporting process follows the stage of analyzing information. You will report to
different stakeholders in different ways, sometimes in written form, sometimes verbally and,
increasingly, making use of tools such as PowerPoint presentations, slides and videos.

Reporting requirements for Various Audiences


Target group | Stage of project cycle | Appropriate format
Board | Interim, based on monitoring analysis | Written report
Board | Evaluation | Written report, with an Executive Summary, and verbal presentation from the evaluation team
Management team | Interim, based on monitoring analysis | Written report, discussed at a Management team meeting
Management team | Evaluation | Written report, presented verbally by the evaluation team
Staff | Interim, based on monitoring analysis | Written and verbal presentation at departmental and team levels
Staff | Evaluation | Written report, presented verbally by the evaluation team and followed by in-depth discussion of relevant recommendations at departmental and team levels
Beneficiaries | Interim (but only at significant points) and evaluation | Verbal presentation, backed up by a summarized document, using appropriate tables, charts, visuals and audio-visuals. This is particularly important if the organisation or project is contemplating a major change that will impact on beneficiaries
Donors | Interim, based on monitoring | Summarized in a written report
Donors | Evaluation | Full written report with executive summary, or a special version focused on donor concerns and interests
Wider development community | Evaluation | Journal articles, seminars, conferences, websites

Sustaining the M&E system within the organization


This step involves creating incentives and disincentives to sustain the M&E system,
documenting the possible hurdles in sustaining a results-based M&E system, validating
and evaluating M&E systems and information, and using M&E to stimulate positive cultural
change within the organization.
THE M&E CONTINUUM

[Figure: the M&E continuum, culminating in decision making]

DATA DEMAND
 It is a measure of the value that the stakeholders and decision makers place on the
information, independent of their use of that information.
 For the purpose of defining demand, it is required that:
 Stakeholders actively and openly request information or
 Demonstrate they are using information in one of the various stages of the DDIU
framework
… thus,
 Data demand requires both of the following criteria:
 The stakeholders and decision makers specify what kind of information they want to
inform a decision; and
 The stakeholders and decision makers proactively seek out that information
DISSEMINATION
 The process of sharing information or systematically distributing information or
knowledge to potential users and/or beneficiaries
 Should produce effective use of information
Thus,
 The goal of dissemination is use

STAKEHOLDERS
 Beneficiaries
 Implementers
 Partners/Donors
 Data collectors
 Programme /Information managers
 Analysts
 Supervisors/Colleagues
 Policy-makers
 Private sector
WHY USE INFORMATION?

 Strengthen programs
 Engage stakeholders
 Ensure accountability and reporting
 Advocate for additional resources
 Inform policies
 Contribute to global lessons learned
ESSENTIALS OF M&E INFORMATION
M&E information must:
 be manageable and timely
 be presented according to the audience's:
- interest
- capacity to understand and analyze
- time, and competing demands on that time
 have transparent quality
 focus on activities and results of interest
 focus on meaning and direction for action

CHAPTER 10: ECONOMIC EVALUATION
Economic evaluation definition: the comparative analysis of alternative courses of action
in terms of both their costs and consequences. The basic tasks of any economic evaluation
are to:
♦ identify,
♦ measure,
♦ value, and
♦ compare the costs and consequences of the alternatives being considered.

 Features that characterize an Economic Evaluation/Analysis


 It deals with both the inputs and outputs, sometimes called costs and
consequences, of activities; this is what allows us to reach a decision.
 Economic analysis concerns itself with choices. Resource scarcity, and our
consequent inability to produce all desired outputs, necessitates that choices must,
and will, be made in all areas of human activity.
 Economic analysis seeks to identify and to make explicit a set of criteria which
may be useful in deciding among different uses of scarce resources.
 Economic evaluations:
♦ always compare any health care programme with an alternative, for
example, no treatment or routine care;
♦ always measure the benefits produced by all alternatives compared;
♦ always measure the cost of any programme.
Importance of economic evaluation
 Economic evaluation of health care programmes aims to aid decision-makers in
their difficult choices of allocating health care resources, setting priorities and
moulding health policy.
 To improve efficiency: the way inputs (money, labour, capital etc) can be converted
into outputs (saving life, health gain, improving quality of life, etc).
 Technical efficiency. This means that we might strive for minimum input for a given
output. For example, if we have decided that performing tonsillectomies on children
is worthwhile, i.e. part of an allocatively efficient allocation of resources, then we may
need to examine the efficiency of how we do this. So, if the output we wish to

achieve is to successfully remove a child’s tonsils then we might choose between,
say, a day case procedure or an inpatient stay. This is an issue of technical efficiency
since the output or ‘outcome’ is fixed, but the inputs will differ depending on which
policy we adopt. The day case approach may perhaps require more intensive staff
input and more follow-up outpatient visits. If this was the case, then inpatient
tonsillectomy may be the more technically efficient strategy.
Types of Economic Evaluation
 Cost-Effectiveness Analysis
-When different health care interventions are not expected to produce the same outcomes,
both the costs and consequences of the options need to be assessed.
-This can be done by cost-effectiveness analysis, whereby the costs are compared with
outcomes measured in natural units, for example, per life saved, per life year gained, or per
pain or symptom free day.
-CEA is concerned with technical efficiency issues, such as:
 What is the best way of achieving a given goal? or
 What is the best way of spending a given budget?
-Comparisons can be made between different health programmes in terms of their cost
effectiveness ratios: cost per unit of effect.
-Under CEA effects are measured in terms of the most appropriate uni-dimensional natural
unit. So, if the question to be addressed was: what is the best way of treating renal failure?
Then the most appropriate ratio with which to compare programmes might be ‘cost per life
saved’.
-Similarly, if we wanted to compare the cost-effectiveness of programmes of screening for
Down’s syndrome the most appropriate ratio might be ‘cost per Down’s syndrome fetus
detected’.
-In deciding whether long-term care for the elderly should be provided in nursing homes or
the community, the 'cost per disability day avoided' might be the most appropriate measure.
-If the outcomes of alternative procedures or programmes under review are the same, or very
similar, then attention can focus upon the costs in order to identify the least cost option-the
method of evaluation will be cost-minimization analysis.
-If, however, the outcomes are not expected to be the same, then both the costs and
consequences of alternative options need to be considered. Cost-effectiveness analysis is one

method of economic evaluation that allows this to be done.
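The comparison of cost-effectiveness ratios described above can be sketched in a few lines of Python; the programme names, costs and effects below are purely hypothetical figures for illustration.

```python
# Illustrative cost-effectiveness comparison: cost per life saved.
# All programme names and figures are hypothetical.

def ce_ratio(cost, effect):
    """Cost-effectiveness ratio: cost per unit of effect (e.g. per life saved)."""
    return cost / effect

programmes = {
    "Programme A": {"cost": 500_000, "lives_saved": 25},
    "Programme B": {"cost": 300_000, "lives_saved": 10},
}

for name, p in programmes.items():
    print(f"{name}: {ce_ratio(p['cost'], p['lives_saved']):,.0f} per life saved")
# Programme A: 20,000 per life saved
# Programme B: 30,000 per life saved
```

On these made-up figures, Programme A delivers the lower cost per life saved; the same arithmetic applies to any uni-dimensional natural unit (life years gained, cases detected, symptom-free days).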
1. Measures of Effectiveness
-In order to carry out a cost effectiveness analysis it is necessary to have suitable measures of
effectiveness.
-These will depend on the objectives of the particular interventions under review.
-In all cost effectiveness analysis, however, measures of effectiveness should be defined in
appropriate natural units and, ideally, expressed in a single dimension.
-Common measures used in several studies have been "lives saved" and "life years gained".
Several other measures of effectiveness have been used by different researchers; examples
are shown in the table below.

Examples of measures of effectiveness


•Cases treated appropriately
•Lives saved
•Life years gained
•Pain or symptom free days
•Cases successfully diagnosed
•Complications avoided

2. Cost-Minimization Analysis
-Cost-minimization analysis is an appropriate evaluation method to use when the case for an
intervention has been established and the programmes and procedures under consideration
are expected to have the same or similar outcomes.
-In these circumstances, attention may focus on the cost side of the equation to identify the
least costly option.
-Cost-minimization analysis:
 is concerned only with technical efficiency;
 can be regarded as a narrow form of cost-effectiveness analysis;
 requires evidence on the equivalence of the outcomes of the different interventions;
 as outcomes are considered equivalent, decisions can be made on the basis of
costs alone.
Advantages

 Simple to carry out: requires only that costs be measured and that outcomes can
be shown to be equivalent
 Avoids needlessly quantifying data
Disadvantages
 Can only be used in narrow range of situations.
 Requires that outcomes be equivalent
-CMA is really a special form of cost-effectiveness analysis, where the consequences of the
alternative treatments being compared turn out to be equivalent.
-In general Cost-effectiveness analysis (CEA) is:
 Concerned with technical efficiency.
 What is the best way of achieving a given goal with least resources?
 What is the best way of spending a given budget?
 Used when the interventions being compared can be analyzed with common
measures.
Advantages of the CEA approach
 It is relatively straightforward to carry out.
 It is often sufficient for addressing many questions in health care.
Disadvantages of CEA approach
 Since the outcome measure is uni-dimensional, CEA cannot incorporate other aspects
of outcome into the cost-effectiveness ratio.
 Interventions with different aims/goals cannot be compared with one another in a
meaningful way.
 The meaning of an outcome measure is not always clear, e.g. what is the value of a
case detected in a screening programme?
 There may be situations in which the option with the highest cost-effectiveness ratio
should be chosen.
3. Discounting Benefits (in cost-effectiveness analysis)
-Costs incurred at different points in time need to be “weighted” or discounted to reflect the
fact that those that occur in the immediate future are of more importance than those that
accrue in the distant future. This raises the question: should the benefits or effects of
alternative procedures also be discounted? (For details about discounting refer to section

72
PHT 422: Health Programme monitoring and Evaluation David Masinde
three of this material)
-Economists differ on this issue. If zero discounting (no discounting) were adopted, the main
consequence would be to change the relative cost effectiveness of different procedures.
-Using a positive discount rate means that projects with long-lasting effects receive lower
priority. If a positive rate is replaced by a zero rate, procedures such as neonatal care, which
lead to benefits over the recipient's entire future lifetime, will become relatively more cost
effective. In practical terms, it is probably true to say that while the case for using a zero
discount rate for benefits has powerful intellectual appeal and may gain empirical support in
the future, it would be too hasty to recommend that positive rates be discarded in economic
evaluations.
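The effect of the discount rate discussed above can be made concrete with a short, hypothetical calculation: the present value of an amount accruing in year t at rate r is amount / (1 + r)**t. The 40-year stream and the 5% rate below are illustrative assumptions only.

```python
# Discounting a stream of yearly benefits to present value.
# The 40-year stream and the 5% rate are hypothetical illustrations.

def present_value(stream, rate):
    """Sum of yearly amounts discounted at `rate`; year 0 is undiscounted."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

# A programme with long-lasting effects (e.g. neonatal care): one unit of
# benefit per year for 40 years.
long_term = [1.0] * 40

print(round(present_value(long_term, 0.00), 2))  # zero rate: 40.0
print(round(present_value(long_term, 0.05), 2))  # 5% rate: 18.02
```

A positive rate sharply shrinks the weight of benefits in the distant future, which is why programmes with long-lasting effects look relatively less cost effective under positive discounting.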
In general:
 Cost-effectiveness analysis is a form of economic evaluation in which the costs of
alternative procedures or programmes are compared with outcomes measured in natural
units-for example, cost per life year saved, cost per case cured, cost per symptom free
day.
 Effectiveness data are ideally collected from economic evaluations built in alongside
clinical trials. In the absence of dedicated trials researchers need to draw on the existing
published work.
 Sensitivity analysis should be applied when there is uncertainty about the costs and
effectiveness of different procedures. This investigates the extent to which results are
sensitive to alternative assumptions about key variables.
 There is debate among economists about whether benefit measures should be "time
discounted" in the same way as costs. If they are not, projects with long-lasting effects
will become relatively more cost effective, for example, maternity services and health
promotion. But it would probably be wrong to recommend this as standard practice.
 Cost-Utility Analysis (CUA)
-CUA is concerned with technical efficiency and allocative efficiency (within the health care
sector).
-CUA tends to be used when quality of life is an important factor involved in the health
programmes being evaluated.
-This is because CUA combines life years (quantity of life) gained as a result of a health

programme with some judgment on the quality of those life years.
-It is this judgment element that is labeled utility.
-Utility is simply a measure of preference, where values can be assigned to different states of
health (relevant to the programme) that represent individual preferences.
-This is normally done by assigning values between 1.0 and 0.0, where 1.0 is the best
imaginable state of health (completely healthy) and 0.0 is the worst imaginable (perhaps
death).
-States of health may be described using many different instruments which provide a profile
of scores in different health domains.
This approach of using utility is not restricted to similar clinical areas, but can be used to
compare very different health programmes in the same terms.
-As a result, ‘cost per QALY gained’ league tables are often produced to compare the relative
efficiency with which different interventions can turn resources invested into QALYs gained.
-It is possible to compare surgical, medical, pharmaceutical and health promotion
interventions with each other.
-Comparability then is the key advantage of this type of economic evaluation.
-For a decision-maker faced with allocating scarce resources between competing claims, CUA
can potentially be very informative.
-However, the key problem with CUA is the difficulty of deriving health benefits. Can a state
of health in fact be collapsed into a single value? If it can then, whose values should be
considered in these analyses? For these reasons, CUA remains a relatively little used form of
economic evaluation.
 When should CUA be used?
CUA should be used:
 When health-related quality of life is the important outcome. For example, in
comparing alternative programmes for the treatment of arthritis, no programme is
expected to have any impact on mortality, and interest is focused on how well
the different programmes will improve the patient's physical function, social
function, and psychological well-being;
 When the programme affects both morbidity and mortality and we wish to have a
common unit of outcome that combines both effects. For example, treatments for
many cancers improve longevity and improve long-term quality of life, but

decrease quality of life during the treatment process itself.
 When the programmes being compared have a wide range of different kinds
of outcomes and we wish to have a common unit of output for comparison. For
example, if you are a health planner who must compare several disparate
programmes applying for funding, such as expansion of neonatal intensive care, a
programme to locate and treat hypertension, and a programme to expand the
rehabilitative services provided to post-myocardial infarction patients;
 When we wish to compare a programme to others that have already been
evaluated using cost-utility analysis.
 When should CUA not be used?
 When only intermediate outcome data can be obtained. For example, in a study to
screen employees for hypertension and treat them for one year, intermediate
outcomes of this type cannot be readily converted into QALYs for use in CUA.
 When the effectiveness data show that the alternatives are equally effective in all
respects of importance to consumers (e.g. including side-effects). In this case, cost-
minimization analysis is sufficient; CUA is not needed;
 When the effectiveness data show that the new programme is dominant; that is, the
new programme is both more effective and less costly (win-win). In this case, no
further analysis is needed;
 When the extra cost of obtaining and using utility values is judged to be in itself not cost
effective. This is the case above in points 2 and 3. It would also be the case even when the new
programme is more costly than the old, if effectiveness data show such an enormous superiority
for the new programme that the incorporation of utility values could almost certainly not change
the result. It might even be the case with a programme that is more costly and only somewhat
more effective, if it can be credibly argued that the incorporation of any reasonable utility values
will show the programme to be overwhelmingly cost-effective.
 Measuring Quality
Measuring a person's quality of life is difficult. Nonetheless, it is important to have some
means of doing so, since many health care programmes are concerned primarily with
improving the quality of a patient's life rather than extending its length. Various quality of
life scales have been developed in recent years.
The Nottingham health profile is one quality of life scale that has been used quite widely in

Britain. It comprises two parts.
 The first measures health status by asking for yes or no responses from patients
to a set of 36 statements related to six dimensions of social functioning:
a) Energy,
b) Pain,
c) Emotional reactions,
d) Sleep,
e) Social isolation,
f) Physical mobility.
These responses are then “weighted” and a score of between 0 and 100 is assigned to
each dimension.
 The second part asks about seven areas of performance that can be expected to be
affected by health:
Employment,
Looking after the home,
Social life,
Home life,
Sex life,
Hobbies,
Holidays.
The Nottingham health profile has been applied, for example, in studies of heart
transplantation, rheumatoid arthritis and migraine, and renal lithotripsy.
Other quite widely used measures include the sickness impact profile and the quality of
wellbeing scale
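The part-one scoring described above, weighted yes/no statements summed into a per-dimension score, can be sketched as follows. The statements and weights are invented for illustration; they are not the real Nottingham health profile items or weights.

```python
# Sketch of profile-style scoring: each dimension holds weighted yes/no
# statements, and the weights of the statements a respondent affirms are
# summed into that dimension's score. Items and weights are hypothetical.

pain_items = {
    "I have pain at night": 21.0,
    "I find it painful to change position": 25.0,
    "I am in constant pain": 33.0,
    "I have pain when walking": 21.0,
}

def dimension_score(items, answers):
    """Sum the weights of the statements answered 'yes' (True)."""
    return sum(w for stmt, w in items.items() if answers.get(stmt, False))

answers = {"I have pain at night": True, "I have pain when walking": True}
print(dimension_score(pain_items, answers))  # 42.0
```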
 Rosser Index
-Rosser and her colleagues described health status in terms of two dimensions: disability and
distress.
-The states of illness are classified into eight categories of disability and four categories of
distress.
-By combining these categories of disability and distress, 32 (8 × 4) different states of
health were obtained.
-Rosser then interviewed 70 respondents (a mixture of doctors, nurses, patients and healthy

volunteers) and, using psychometric techniques, sought to establish their views about the
severity of each state relative to other states.
-The final results of this exercise were expressed in terms of a numeric scale extending from
0 = dead to 1 = perfect health.
-With this classification system it becomes possible to assign a quality of life score to any
state of health as long as it is placed in an appropriate disability or distress category.
 Quality – Adjusted Life – Years (QALY)
-One of the features of conventional CUA is its use of the QALY concept; results are
reported in terms of cost per QALY gained.
-QALYs combine life years gained with a measure of the quality of those years.
-Quality is measured on a scale of 0 to 1, with 0 equated to being dead and 1 equated to the
best imaginable state of health.
-QALYs combine all dimensions of health and survival into a single index.
-The cost-utility (CU) ratio for programme A relative to programme B is:

CU ratio = (Cost_A - Cost_B) / (QALY_A - QALY_B)
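A minimal sketch of that calculation follows; the costs, life years and utility weights are hypothetical figures chosen only to show the arithmetic.

```python
# Hypothetical cost-utility comparison. QALYs gained = life years gained
# multiplied by the utility weight (0.0 = dead, 1.0 = full health).

def qalys(years_gained, utility):
    return years_gained * utility

def cu_ratio(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained of programme A over programme B."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

qaly_a = qalys(10, 0.8)  # programme A: 10 extra years at utility 0.8 -> 8.0 QALYs
qaly_b = qalys(6, 0.5)   # programme B: 6 extra years at utility 0.5 -> 3.0 QALYs

print(cu_ratio(cost_a=60_000, qaly_a=qaly_a, cost_b=20_000, qaly_b=qaly_b))
# 8000.0 -- programme A costs an extra 8,000 per additional QALY gained
```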

What is the QALY concept?


The advantage of the QALY as a measure of health outcome is that it can simultaneously
capture gains from reduced morbidity (quality gains) and reduced mortality (quantity gains),
and combine these into a single measure. Moreover, the combination is based on the relative
desirability of the different outcomes.
-The QALY approach, which forms a key part of most cost-utility analyses, has been the
subject of some criticism.
-It has been accused of discriminating against elderly people, making illegitimate
interpersonal comparisons, disregarding equity considerations, and introducing bias into
quality of life scores.
-A case study can show how cost-utility analysis is applied in practice.
 Cost-Benefit Analysis
-Cost benefit analysis is the most comprehensive and theoretically sound form of economic
evaluation and it has been used as an aid to decision making in many different areas of
economic and social policy in the public sector for more than fifty years.

-CBA estimates and totals up the equivalent money value of the benefits and costs to the
community of projects to establish whether they are worthwhile. These projects may be dams
and highways or can be training programmes and health care systems.
-The main difference between cost-benefit analysis and other methods of economic
evaluation that were discussed earlier in this series is that it seeks to place monetary values
on both the inputs (costs) and outcomes (benefits) of health care.
-Among other things, this enables the monetary returns on investments in health to be
compared with the returns obtainable from investments in other areas of the economy.
-Within the health sector itself, the attachment of monetary values to outcomes makes it
possible to say whether a particular procedure or programme offers an overall net gain to
society, in the sense that its total benefits exceed its total costs.
-CBA requires programme consequences to be valued in monetary units, thus enabling the
analyst to make a direct comparison of the programme's incremental cost with its incremental
consequences in commensurate units of measurement, be they dollars or pounds.
-CBA compares the discounted future streams of incremental programme benefits with
incremental programme costs; the difference between these two streams is the net social
benefit of the programme.
-The goal of a CBA is to identify whether a programme's benefits exceed its costs, a
positive net social benefit indicating that the programme is worthwhile.
-CBA is a full economic evaluation because programme outputs must be measured and
valued. In many respects CBA is broader in scope than CEA/CUA.
-CBA is broader in scope and able to inform questions of allocative efficiency, because it
assigns relative values to health and non-health related goals to determine which goals are
worth achieving, given the alternative uses of resources, and thereby determining which
programmes are worthwhile.
 Both costs and benefits are assigned a monetary value. The benefits of any
intervention can then be compared directly with any costs incurred.
 If the value of benefits exceeds the costs of any intervention, then it is potentially
worthwhile to carry that intervention out.
 If society funds projects within a given budget, then it can maximize the benefits
generated by social spending. CBA is concerned with allocative efficiency, i.e. with
the question: is a particular goal worthwhile? Potentially it can answer questions
such as: should extra money be used for heart transplants or for improving housing?
 The method requires that all resources and benefits generated by an intervention
be assigned a monetary value. It therefore needs to cost things which have no
market value, e.g. changes in health, quality of life, length of life, pain, etc.
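The net-social-benefit calculation described above can be sketched as follows; the benefit and cost streams and the 5% discount rate are hypothetical assumptions, not figures from the text.

```python
# Hypothetical cost-benefit analysis: discount the yearly benefit and cost
# streams, then take the difference (the net social benefit).

def npv(stream, rate):
    """Present value of a yearly stream; year 0 is undiscounted."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

benefits = [0, 40_000, 40_000, 40_000]    # monetary value of outcomes, years 0-3
costs    = [90_000, 5_000, 5_000, 5_000]  # set-up cost, then running costs

rate = 0.05  # illustrative discount rate
net_social_benefit = npv(benefits, rate) - npv(costs, rate)
print(round(net_social_benefit, 2))  # positive, so the programme is worthwhile
```

A positive net social benefit indicates that the programme's discounted benefits exceed its discounted costs.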

