PHT 422 Programme Monitoring and Evaluation
MASENO UNIVERSITY
SCHOOL OF PUBLIC HEALTH AND COMMUNITY DEVELOPMENT
(SPHCD)
BSC PUBLIC HEALTH WITH IT
Year 2 semester 2
COURSE OUTLINE
COURSE FACILITATOR: D. Masinde
E-MAIL ADDRESS: dmasinde2004@yahoo.com
1. Course Code: PHT 422
2. Course Title: HEALTH PROGRAMME MONITORING AND EVALUATION
(ST - 1 UNIT)
3. Introduction: Monitoring and evaluation are important tools which an organization can use to demonstrate its accountability, improve its performance, increase its ability to obtain funds, support future planning, and fulfill the organization's objectives. By communicating the results of an evaluation, your organization can inform its staff, board of directors, service users, funders, the public, or other stakeholders about the benefits, efficiency, impact, lessons learnt and effectiveness of the organization's services and programs.
4. Course description: Concepts and Principles of Planning, Monitoring and
Evaluation: Introduction to Planning, Monitoring and Evaluation, relationship between
monitoring and evaluation, defining program components, different types of M&E; Project
design-project life cycle, stakeholder analysis and management, project control, critical path
planning and project resources scheduling and project flowcharts
Monitoring & Evaluation Frameworks: M&E frameworks: conceptual frameworks, logical
frameworks, result frameworks and M&E plan, Developing indicators, Measurement of results,
supervision of performance monitoring, planning and implementing participatory monitoring
Evaluation Processes: Planning an evaluation activity, data quality, Designing an evaluation
and Conducting an evaluation, impact assessment
Data Analysis And Report Writing: Qualitative and Quantitative data analysis; process,
methods, interpretation, presentation; Report writing and presentation skills; Designing Health
and Information Systems and Management Information Systems (HIS/MIS); Emerging issues in
M&E and HIS/MIS. Economic evaluations: cost-benefit analysis (CBA), cost-utility analysis (CUA), and cost-effectiveness analysis (CEA)
COURSE CONTENT
LESSON 1. Introduction to monitoring and evaluation: purpose, types of M/E, approaches of M/E
LESSON 2. Logical framework matrix: principles of M/E, logical framework, steps for designing logical frameworks
LESSON 3. Steps for designing an M/E system: readiness assessment, agree on outcomes to monitor, construct indicators, TORs, evaluation questions, evaluation tools
LESSON 4. Steps for designing an M/E system: data collection tools, ethical review, analysing data
LESSON 5. CAT 1
LESSON 6. Information gathering in M/E: quantitative and qualitative data, means of gathering the information, tools of data collection, analysing data
LESSON 7. M/E information use: users of M/E information
LESSON 8. M/E report writing and communication of findings; economic evaluations: CBA, CUA, CEA
LESSON 9. CAT 2
LESSON 10. M/E case studies: malaria, TB, VCT, training programmes
EVALUATION
CATs and Assignments - 40%
Final exam - 60%
Concepts
Monitoring and evaluation are important tools which an organization can use to demonstrate its accountability, improve its performance, increase its ability to obtain funds, support future planning, and fulfill the organization's objectives. By communicating the results of an evaluation, your organization can inform its staff, board of directors, service users, funders, the public, or other stakeholders about the benefits and effectiveness of your organization's services and programs, and explain how charities work and how they are monitored.
Project evaluation and project management are interrelated. Evaluation can help you complete a
project successfully, provide evidence of successes or failures, suggest ways for improvements,
and inform decisions about the future of current and planned projects.
Project evaluation is an accountability function. By evaluating a project, you monitor the process
to ensure that appropriate procedures are in place for completing the project on time, and you
identify and measure the outcomes to ensure the effectiveness and achievements of the project.
Monitoring
Monitoring is the systematic collection and analysis of information as a project progresses.
It is aimed at improving the efficiency and effectiveness of a project or organisation.
It is based on targets set and activities planned during the planning phases of work. It
helps to keep the work on track, and can let management know when things are going
wrong.
If done properly, it is an invaluable tool for good management, and it provides a useful
base for evaluation: monitoring is an input to project evaluation.
It enables you to determine whether the resources you have are sufficient and are being
well used, whether the capacity you have is sufficient and appropriate, and whether you
are doing what you planned to do.
Types of Monitoring
Project monitoring and evaluation can be categorized according to the elements or levels measured, or according to the person or institution which takes the lead in setting up, managing and using the system.
Project evaluation involves assessment of activities that are designed to perform a specified task in a specific period of time.
Evaluation is the comparison of actual project impacts against the agreed strategic plans. It looks at what you set out to do, at what you have accomplished, and how you accomplished it.
What monitoring and evaluation have in common is that they are geared towards learning from
what one is doing and how one is doing it, by focusing on the following aspects:
Efficiency tells you whether the input into the work is appropriate in terms of the output. This could be input in terms of money, time, staff, equipment and so on. When you run a project and are concerned about its replicability or about going to scale, then it is very important to get the efficiency element right.
Effectiveness is a measure of the extent to which a development programme or project achieves the specific objectives it set. If, for example, we set out to improve the qualifications of all the high school teachers in a particular area, did we succeed?
Lessons learnt
Impact tells you whether or not what you did made a difference to the problem situation
you were trying to address.
Types of Evaluations
Different forms of evaluations are conducted by development actors. Evaluations may be categorized according to the following:
1) The person who takes the lead or participates in the evaluation,
2) Level of emphasis
3) Time at which an intervention is evaluated
4) Nature of intervention being evaluated.
Important notes
Evaluation could focus on implementation:
1) Traditional evaluation
2) Process evaluation
3) Implementation together with the results (results-based evaluation).
In terms of intervention being implemented:
1) project,
2) programme,
Forms of Evaluation
Depending on the criteria explained above, the forms of evaluation can be explained as follows:
1) Ex-ante or prospective evaluation: An evaluation that is performed before implementation of a development intervention (project appraisal).
Project evaluation helps you understand the progress, success, and effectiveness of a project. It
provides you with a comprehensive description of a project, including insight on the:
i. Needs your project will address;
ii. People who need to get involved in your project;
iii. Definition of success for your project;
iv. Outputs and immediate results that you could expect;
v. Outcomes your project is intended to achieve;
vi. Activities needed to meet the outcomes; and
vii. Alignment and relationships between your activities and outcomes.
In building Monitoring & Evaluation systems, the following actions are essential:
Formulation of outcomes and goals
Selection of outcome indicators to monitor
Gathering baseline information on the current condition
Setting specific targets to reach and dates for reaching them
Regularly collecting data to assess whether the targets are being met
Analyzing and reporting results
3) Good monitoring requires regular visits by staff who focus on results and follow up to verify and validate progress. In addition, the Programme Manager must organize visits and/or bilateral meetings dedicated to assessing progress, looking at the big picture and analyzing problem areas.
4) Regular analysis of reports such as the annual project report (APR) is another minimum
standard for good monitoring.
5) Good monitoring finds ways to objectively assess progress and performance based on
clear criteria and indicators. To better assess progress towards outcomes, country offices must make an effort to improve their performance measurement system by developing indicator baselines.
6) Assessing the relevance, performance and success of project development interventions
also enhances monitoring.
The process of choosing outcomes involves building a participatory and consultative process
involving stakeholders. To set and agree upon outcomes, follow the following steps:
Identify specific stakeholder representatives
Identify major concerns of stakeholder groups
Translate problems into statements of possible outcome improvements
Disaggregate to capture key desired outcomes
A typical activity monitoring sheet records the following fields:
Type of activity
Number of events
Start
Finish
Location(s)
Participants: age range, gender, other specifications (e.g. education, social/economic status, ethnicity)
Outputs
Time
Budget
Staff
Amendments
Comments
Background/project description
This involves writing a project description that will give a clear understanding of the project from the start, before undertaking evaluation. The project description includes the following:
1) The needs and objectives that the project will address;
2) The target group that will take action in this project;
3) The target group that will be affected by the project;
4) The planned outcomes of the project; and
5) The activities that are required to meet those outcomes.
Note:
The purpose of an evaluation is the reason why you are doing it. It goes beyond what you want to
know to why you want to know it. It is usually a sentence or, at most, a paragraph. It has two
parts:
i. What you want evaluated;
ii. To what end you want it done.
Evaluation tools can use both formal and informal methods for gathering information. Formal
evaluation tools include focus groups, interviews, survey questionnaires, and knowledge tests.
Informal evaluation tools include observations, informal conversations, and site visits.
Depending on your evaluation questions, you may need a tool that helps you gather quantitative
information by numbering, rating and ranking information.
A sample evaluation budget:
Communication: prepare and review an evaluation report; prepare presentation; prepare other media-related materials (where applicable); present findings to various stakeholders. Total.
Travel and meetings: related to the evaluation group; others. Total.
Operating expenses: photocopying/printing; couriers; phone/fax etc. Total.
Grand total.
Disclose any conflict of interest that you or any member of the evaluation group may
have.
Clarify your staff's and your own credibility and competence in undertaking the evaluation. Anticipate your collective shortcomings, and ask for solutions and help to mitigate them.
Be aware of any substantial risks that this evaluation may pose for various stakeholders
and discuss them with the evaluation group.
Remain unbiased and fair in all stages of evaluation. Make sure that your personal
opinions toward a group, topic, or social matter will not interfere with your evaluation
work.
Be ready to negotiate when dealing with various stakeholders and their expectations.
Be clear and accurate in reporting the evaluation results, and explain the limitations of the
work and recommendations for improvements.
Indicators enable you to reduce a large amount of data down to its simplest form (e.g.
percent of clients who tested after receiving pre-test counseling, prevalence rate).
When compared with targets or goals, indicators can:
signal the need for corrective management action,
evaluate the effectiveness of various management actions, and
provide evidence as to whether objectives are being achieved
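As a minimal sketch of how such an indicator is computed, the example below calculates the "percent of clients who tested after receiving pre-test counseling" from a handful of invented client records (not data from any real programme):

```python
# Hypothetical client records: did the client receive pre-test
# counselling, and did the client go on to be tested?
clients = [
    {"counselled": True,  "tested": True},
    {"counselled": True,  "tested": False},
    {"counselled": True,  "tested": True},
    {"counselled": False, "tested": False},
]

# Indicator: percent of counselled clients who were tested.
counselled = [c for c in clients if c["counselled"]]
tested = [c for c in counselled if c["tested"]]
indicator = 100.0 * len(tested) / len(counselled)
print(f"{indicator:.1f}% of counselled clients were tested")
```

Comparing this single number against a target (say, 80%) is what signals whether corrective management action is needed.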
Expressing indicators
An indicator is usually expressed in numerical form:
number
ratio
percentage
average
rate
index (composite of indicators)
An indicator can also be expressed in non-numerical form such as in words
Non-numerical indicators are also referred to as qualitative or categorical indicators
Characteristics of a Good Indicator
Prepare and update a list of program indicators to track each level of result
Identify data requirements and sources for each indicator
Limit the number of indicators to program management and information reporting needs
(Quantity + quality + time = QQT)
As an example:
Rice yields (of same quality as 1997 crop) of small-scale farmers (owning 3 hectares or less) increased by X bushels.
Rice yields (of same quality as 1997 crop) of small-scale farmers (owning 3 hectares or less) increased by X bushels by the end of the 1998 harvest.
Category of indicators
1. Goal Level Indicators - Often describe program or sector objectives to which this project and several others are directed. For this reason, goal level indicators may include targets beyond the scope of this project, such as "small farmer income increased", where farmer income may be increased by the combined outcomes of several projects.
2. Purpose Level Indicators - The project purpose is the main reason why you are doing the project. It is why you are producing outputs.
3. Output Level Indicators - By definition, these indicators establish the terms of reference for the project. If a project team or contractor is responsible for all the outputs, then these indicators define the deliverables for which the contractor is accountable.
Conceptual frameworks
Conceptual, or “research”, frameworks (models) are diagrams that identify and illustrate the
relationships among systemic, organizational, individual, or other salient factors that may
influence program/project operation and the successful achievement of program or project goals.
[Figure: conceptual framework for the proximate determinants of HIV. Underlying determinants (new partner acquisition, mixing patterns, concurrency, abstinence, risky sexual practices, condom use, treatment of concurrent STIs, chemotherapy, treatment with ARVs, treatment of opportunistic infections) act through proximate determinants (rate of contact of susceptible with infected persons, efficiency of transmission per contact, duration of infectivity) to influence HIV incidence, STI incidence and mortality. Source: Boerma and Weir 2005.]
Objectives
Objective 1: Number of new Infections reduced
Objective 2: Improved health & quality of life of
people infected & affected by HIV/AIDS
Objective 3: Strengthened capacity of NACC & stakeholders to respond to the HIV/AIDS
epidemic at all levels through improved research, M&E and improved management &
coordination
Results Framework
Results frameworks are diagrams that identify steps, or levels, of results, and illustrate the causal relationships linking all levels of a program's objectives. Other terms used: strategic frameworks. They focus on the end result(s) and the strategies that can be used to achieve them, and they identify the logic and links behind programs and the necessary and sufficient elements for success.
Elements of Results Frameworks
Goal Statement— the change in health conditions that we hope to achieve
Strategic (or Key) Objective (SO)—the main result that will help us achieve our goal and
for which we can measure change
Intermediate Results (IRs)—the things that need to be in place to ensure achievement of
the SO
Strategies & Activities —what a project does to achieve its intermediate results that
contribute to the objective
[Example intermediate result from a family planning results framework: IR2.3 Improved job performance of health providers, trainers, and administrators for the FP program.]
Logical Frameworks
A logical framework (LogFRAME) is a management tool for strategic planning and
program/project management. It looks like a table (or framework) and aims both to be logical to
complete, and to present information about projects in a concise, logical and systematic way.
A LogFRAME summarizes, in a standard format:
What your project is trying to achieve
How it aims to do this
What is needed to ensure success
Ways of measuring progress and the potential problems along the way
Purposes of logFrame:
Summarizes what the project intends to do and how
Summarizes key assumptions
Summarizes outputs and outcomes that will be monitored and evaluated
A LogFrame typically lists each output with its activities, and the inputs with a description and a cost per unit, for example:
Output 2: Activity 2.1, Activity 2.2, Activity 2.3, Activity 2.4
Inputs (description, cost/unit): Input 1, Input 2, Input 3
Taskforce on Communicable Disease Control in the Barents and Baltic Sea Regions: Tuberculosis
As an example, you could argue that if you achieve the output of supplying farmers with improved seed, then the purpose of increased production will be achieved.
As you make the cause and effect linkages between objectives at the different levels stronger,
your project design will be improved.
The logical framework forces you to make this logic explicit. It does not guarantee a good
design because the validity of the cause and effect logic depends on the quality and
experience of the design team.
PHT 422: Health Programme monitoring and Evaluation David Masinde
If cause and effect is the core concept of good project design, necessary and sufficient conditions are the corollary. The necessary conditions describe the cause-and-effect relationship between the Activity-to-Output, Output-to-Purpose and Purpose-to-Goal objectives for accomplishing project objectives. This is the internal logic, but it does not define the different conditions at each level for accomplishing the next higher level.
Step 7: Define the Objectively Verifiable Indicator (OVI) at Goal then Purpose then
Output then Activity levels.
The fewer the better. Use only the number of indicators required to clarify what must be
accomplished to satisfy the objectives stated in the Narrative Summary column.
Step 10: Check the Logical Framework using the Project Design Checklist
Work through the project design checklist as an aid to ensuring that your project meets all the
requirements of a well designed Logical framework.
Step 11: Review the Logical Framework Design in the light of previous experience
You should have been thinking about your previous experience of projects throughout the
preparation for the logical framework.
Advantages of Logical Framework Analysis
The major advantages of the Logical Framework approach are:
1. It brings together in one place a statement of all key components of the project or
programme.
2. It meets the requirements of good project design and enables possible responses to past weaknesses in many designs.
3. It is easy to learn and use.
4. It does not add time or effort to project management but reduces it.
5. It anticipates implementation.
6. It sets up a framework for monitoring and evaluation where planned and actual results
can be compared.
The approach also has disadvantages:
2. Rigidity in project management may arise when objectives and external factors specified during design are over-emphasized.
3. It requires a team process with good leadership and facilitation skills to be most
effective.
Logic Models
Problem Statement
Implementation
Input Activities Outputs
Outcomes
Impacts
Inputs: Resources used in a program, such as money, staff, curricula, and materials.
– GAP, government, and other donor funds
– C&T personnel
– VCT protocols and guidance
– Training materials
– HIV test kits
Activities: Services that the program provides to accomplish its objectives, such as
outreach, materials distribution, counseling sessions, workshops, and training.
– Train C&T personnel and site managers
– Provide pre-test counseling, HIV tests, post-test counseling
Outputs: Direct products or deliverables of the program, such as intervention
sessions completed, people reached, and materials distributed.
– # personnel certified
– # clients receiving pre-test counseling, HIV tests, post-test counseling
Outcomes: Program results that occur both immediately and some time after the
activities are completed, such as changes in knowledge, attitudes, beliefs, skills,
behaviors, access, policies, and environmental conditions.
– Quality of VCT improved
– Access to VCT increased
– Clients develop & adhere to personalized risk-reduction and treatment
strategy
Impacts: Long-term results of one or more programs over time, such as changes in
HIV infection, morbidity, and mortality
– HIV transmission rates decrease
– HIV incidence decreases
– HIV morbidity and mortality decrease
INPUT
Human and financial resources to develop and print educational brochure
PROCESS
Distribute brochure to health facilities
Meet with physicians to promote distribution of brochure
OUTPUT
• Brochure distributed to clients of facilities
OUTCOME
• Increased customer knowledge of TB transmission and treatment
• Increased demand for quality TB services
IMPACT
• Decreased TB infection, morbidity and mortality
States outcomes that are within the scope of the program’s influence
CHAPTER FIVE: EVALUATION DESIGNS
An evaluation design aims at describing the operations of the programme, its immediate results (outputs) and challenges. Evaluation design is a plan for the evaluation; it is linked to the evaluation questions and consists of:
– Methods for addressing them
– Data collection
– Analysis
Further, a requirement of this design is that items, after being selected randomly from the population, be randomly assigned to the experimental and control groups (such random assignment of items to two groups is technically described as the principle of randomization). Thus, this design yields two groups as representatives of the population. In diagram form this design can be shown in this way. Since in the simple randomized design the elements constituting the sample are randomly drawn from the same population and randomly assigned to the experimental and control groups, it becomes possible to draw conclusions on the basis of samples applicable to the population.
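The random assignment step of this design can be sketched as follows; the subject list, the fixed seed and the 50/50 split are hypothetical choices for illustration:

```python
import random

# Hypothetical list of 20 study subjects already sampled at random
# from the population.
subjects = [f"subject_{i:02d}" for i in range(20)]

random.seed(1)            # fixed seed so the split is reproducible
random.shuffle(subjects)  # random order removes selection bias

# Randomly assign half to the experimental group, half to control.
half = len(subjects) // 2
experimental = subjects[:half]
control = subjects[half:]
print(len(experimental), len(control))  # prints: 10 10
```

Because every subject has the same chance of landing in either group, differences observed between the groups can be attributed to the intervention rather than to how the groups were formed.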
SAMPLING DESIGN
Sampling may be defined as the selection of some part of an aggregate or totality on the basis
of which a judgement or inference about the aggregate or totality is made. In other words, it
is the process of obtaining information about an entire population by examining only a part
of it. In most of the research work and surveys, the usual approach happens to be to make
generalizations or to draw inferences based on samples about the parameters of population
from which the samples are taken.
Sampling design: A sample design is a definite plan for obtaining a sample from the sampling frame. It refers to the technique or procedure the researcher would adopt in selecting some sampling units from which inferences about the population are drawn. Sampling design is determined before any data are collected.
2) SNOWBALL SAMPLING: This technique involves asking individuals who have already
responded to a survey to identify additional respondents, and is useful when the
members of a population are hard to reach or identify (e.g., people who participate in
a particular activity, members of a particular organization). You can use this
technique in conjunction with either random or convenience sampling. This technique
also results in a sample that does not represent the entire population.
3) Quota sampling
4) Purposive
Probability sampling:
Probability sampling is also known as ‘random sampling’ or ‘chance sampling’. Under this
sampling design, every item of the universe has an equal chance of inclusion in the sample.
They include:
Systematic sampling: In some instances, the most practical way of sampling is to
select every ith item on a list. Sampling of this type is known as systematic sampling.
An element of randomness is introduced into this kind of sampling by using random
numbers to pick up the unit with which to start. For instance, if a 4 per cent sample is
desired, the first item would be selected randomly from the first twenty-five and
thereafter every 25th item would automatically be included in the sample. Thus, in
systematic sampling only the first unit is selected randomly and the remaining units
of the sample are selected at fixed intervals. Although a systematic sample is not a random sample in the strict sense of the term, it is often considered reasonable to treat a systematic sample as if it were a random sample.
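The 4 per cent example above (a random start within the first 25 items, then every 25th item thereafter) can be sketched like this; the frame of 500 items is hypothetical:

```python
import random

# Hypothetical sampling frame of 500 listed items.
frame = list(range(1, 501))

k = 25  # sampling interval for a 4 per cent sample (1 in 25)
random.seed(7)
start = random.randint(0, k - 1)  # random start within the first 25 items

# Take the starting item and every 25th item thereafter.
sample = frame[start::k]
print(len(sample))  # prints: 20  (4% of 500)
```

Note that only the first unit is selected randomly; the rest follow automatically from the fixed interval.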
Stratified sampling: Under stratified sampling the population is divided into several
sub-populations that are individually more homogeneous than the total population
(the different sub-populations are called ‘strata’) and then we select items from each
stratum to constitute a sample. Since each stratum is more homogeneous than the total
population, we are able to get more precise estimates for each stratum, and by estimating more accurately each of the component parts, we get a better estimate of the whole.
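A common way to draw a stratified sample is proportional allocation, where each stratum contributes in proportion to its share of the population. A sketch, with hypothetical strata of health facilities:

```python
import random

# Hypothetical population grouped into strata (e.g. facility type).
strata = {
    "hospitals":      list(range(100)),
    "health_centres": list(range(300)),
    "dispensaries":   list(range(600)),
}
total = sum(len(units) for units in strata.values())  # 1000 units
sample_size = 100

random.seed(3)
sample = {}
for name, units in strata.items():
    # Proportional allocation: stratum sample size is proportional
    # to the stratum's share of the population.
    n = round(sample_size * len(units) / total)
    sample[name] = random.sample(units, n)

print({name: len(s) for name, s in sample.items()})
# prints: {'hospitals': 10, 'health_centres': 30, 'dispensaries': 60}
```

Sampling within each (more homogeneous) stratum is what yields the more precise estimates described above.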
Cluster sampling: If the total area of interest happens to be a big one, a convenient
way in which a sample can be taken is to divide the area into a number of smaller
non-overlapping areas and then to randomly select a number of these smaller areas
(usually called clusters), with the ultimate sample consisting of all (or samples of)
units in these small areas or clusters. Thus in cluster sampling the total population is
divided into a number of relatively small subdivisions which are themselves clusters
of still smaller units and then some of these clusters are randomly selected for
inclusion in the overall sample.
Area sampling: If clusters happen to be some geographic subdivisions, in that case
cluster sampling is better known as area sampling. In other words, cluster designs,
where the primary sampling unit represents a cluster of units based on geographic
area, are distinguished as area sampling.
Multi-stage sampling: Multi-stage sampling is a further development of the principle
of cluster sampling. Suppose we want to investigate the working efficiency of
nationalized banks in India and we want to take a sample of a few banks for this
purpose. The first stage is to select large primary sampling units, such as states in a country. Then we may select certain districts and interview all banks in the chosen districts. This would represent a two-stage sampling design, with the ultimate sampling units being clusters of districts. If, instead of taking a census of all banks within the selected districts, we select certain towns and interview all banks in the chosen towns, this would represent a three-stage sampling design. If, instead of taking a census of all banks within the selected towns, we randomly sample banks from each selected town, then it is a case of using a four-stage sampling plan. If we select randomly at all stages, we will have what is known as a 'multi-stage random sampling design'.
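The bank example can be sketched as a three-stage random selection (states, then districts, then banks); the frame sizes and the number of units drawn at each stage are hypothetical:

```python
import random

random.seed(5)

# Hypothetical three-stage frame: states -> districts -> banks.
frame = {
    f"state_{s}": {
        f"district_{s}_{d}": [f"bank_{s}_{d}_{b}" for b in range(6)]
        for d in range(4)
    }
    for s in range(8)
}

# Stage 1: select 2 states at random.
states = random.sample(list(frame), 2)
banks = []
for state in states:
    # Stage 2: select 2 districts within each chosen state.
    districts = random.sample(list(frame[state]), 2)
    for district in districts:
        # Stage 3: select 3 banks within each chosen district.
        banks += random.sample(frame[state][district], 3)

print(len(banks))  # prints: 12  (2 states x 2 districts x 3 banks)
```

Because every stage uses random selection, this is a multi-stage random sampling design in the sense described above.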
CHAPTER FOUR: DATA COLLECTION IN MONITORING AND EVALUATION
further. Questionnaires are also over-used and people get tired of completing them. Questionnaires must be piloted to ensure that questions can be understood and cannot be misunderstood. If the questionnaire is complex and will need computerized analysis, you need expert help in designing it.
Focus group discussion: In a focus group, about six to 12 people are interviewed together by a skilled interviewer/facilitator with a carefully structured interview schedule. Questions are usually focused around a specific topic or issue.
Advantages: This can be a useful way of getting opinions from quite a large sample of people.
Disadvantages: It is difficult to do random sampling for focus groups, and this means findings may not be generalized. Sometimes people influence one another, either to say something or to keep quiet about something. If possible, focus group interviews should be recorded and then transcribed. This requires special equipment and can be very time-consuming.

Community meetings: This involves a gathering of a fairly large group of beneficiaries to whom questions, problems and situations are put for input to help in measuring indicators.
Advantages: Community meetings are useful for getting a broad response from many people on specific issues. They are also a way of involving beneficiaries directly in an evaluation process, giving them a sense of ownership of the process. They are useful to have at critical points in community projects.
Disadvantages: Difficult to facilitate and requires a very experienced facilitator. May require breaking into small groups followed by plenary sessions when everyone comes together again.
[Attitude survey, continued:] ...know, disagree, or disagree strongly with a statement. You can use pictures and symbols in this technique if people cannot read and write.
Disadvantages: ...statement, and you cannot be sure whether an opinion is being given on one or the other or both.
Critical event/incident analysis: This method is a way of focusing interviews with individuals or groups on particular events or incidents. The purpose of doing this is to get a very full picture of what actually happened.
Advantages: Very useful when something problematic has occurred and people feel strongly about it. If all those involved are included, it should help the evaluation team to get a picture that is reasonably close to what actually happened and to be able to diagnose what went wrong.
Disadvantages: The evaluation team can end up submerged in a vast amount of contradictory detail and lots of "he said/she said". It can be difficult not to take sides and to remain objective.

Participant observation: This involves direct observation of events, processes, relationships and behaviours. "Participant" here implies that the observer gets involved in activities rather than maintaining a distance.
Advantages: It can be a useful way of confirming, or otherwise, information provided in other ways.
Disadvantages: It is difficult to observe and participate. The process is very time-consuming.

Self-drawings: This involves getting participants to draw pictures, usually of how they feel or think about something.
Advantages: Can be very useful, particularly with younger children.
Disadvantages: Can be difficult to explain and interpret.
related project activities.
Sources of Data
Documents
Recipients of services
Programme staff
Provision of Service
– Service statistics
Planned inputs and activities outlined in the work plan are compared with monitoring
information
– e.g. Compare number of planned trainings with those actually undertaken
– Minutes of meetings :Looks at decisions on the programme
– Financial records :Compares budget with expenditures
– Discussion with management : Gathers qualitative data about the overview of
the programme design and implementation. Clarify issues that are not clear
from the documents
– Interview with staff: Collects various types of information to establish attitudes and opinions on the programme and workload, and to get insight into the workings of the programme
– Observation of service: When done using a specified set of criteria, directly
observing the provision of services gives an indication of the quality of those
services
– Survey of beneficiaries: Collects data to determine types and perceived quality
of services received
CHAPTER SIX: DATA CAPTURE AND MANAGEMENT
Data capture: the process of converting data from paper form into a format that can be interpreted easily
Data capture Steps
Receipt of forms
Editing
Querying
Imputation
Coding
Conversion
Verification
Validation
Receipt of forms
Evaluator should put strict reporting schedule in place. Schedule is used to check on:
Promptness
Completeness
Assign an ID number to each form
Editing
Procedure that ensures completeness and accuracy by detecting missing, inconsistent or invalid entries.
Imputation
Procedure of assigning the most probable value to an item whose exact value is unknown.
It’s only used as a last resort.
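As a minimal sketch of how imputation might be carried out in practice, the function below assigns the arithmetic mean of the observed values to missing entries. The function name and the choice of the mean as the "most probable value" are illustrative assumptions, not part of the course material.

```python
def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values.

    Imputation assigns the most probable value to an unknown item; here
    the arithmetic mean stands in as that value (an illustrative choice).
    """
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]
```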
Coding
Translation of an item into a numerical value. Various types of items:
Items with numeric values
Pre-coded items
“Unknown” or “not stated”
Open-ended items
Example item and code:
Item: Community participation
Code: 1 = mentioned; 0 = not mentioned
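The translation of a pre-coded item into numeric values can be sketched as a simple lookup. The codebook below is hypothetical, mirroring the community-participation example above; a response outside the codebook is returned as None so that it can later be queried or imputed.

```python
# Hypothetical codebook mirroring the example item above.
CODEBOOK = {
    "community participation": {"mentioned": 1, "not mentioned": 0},
}

def code_item(item, response):
    """Translate a textual response into its numeric code.

    Returns None when the response is not in the codebook, so the
    record can later be queried or imputed rather than silently coded.
    """
    return CODEBOOK.get(item, {}).get(response)
```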
Conversion
Converting data from paper-based forms onto electronic media
Manual – keyboard data entry. Potential source of errors
Automated – specific hardware and software. Can also be used for some editing,
coding and imputation
Data entry screen
An electronic replica of the form/questionnaire, designed using software packages such as EPINFO, EPIDATA or SPSS. When designing it:
Specify fields – numeric or alphanumeric
Specify no. of characters for each variable
Define each variable by a specific “variable name” - DICTIONARY
Assign value labels to each variable
Controls
Create controls to conform to data requirements. Examples:
Reject duplication
Reject codes outside those specified
Obey skip patterns
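Controls of this kind can be sketched as a small checking routine. The field names and rules below are illustrative assumptions, not a prescribed format: the routine rejects duplicate ID numbers and codes outside those specified, returning an empty list when the record is accepted.

```python
def check_record(record, seen_ids, valid_codes):
    """Apply data-entry controls to one record.

    seen_ids: IDs already entered (used to reject duplication).
    valid_codes: mapping of field name -> set of permitted codes
                 (used to reject codes outside those specified).
    Returns a list of error messages; an empty list means accepted.
    Field names here are illustrative.
    """
    errors = []
    if record["id"] in seen_ids:
        errors.append("duplicate ID")
    for field, codes in valid_codes.items():
        if record.get(field) not in codes:
            errors.append(f"invalid code for {field}")
    return errors
```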
Data entry
Keying in data is usually done manually. It is time-consuming and a potential source of errors, hence the necessity for verification.
Verification
Manual conversion. Forms are independently keyed in and results compared to original set. If
discrepancy exceeds a pre-set limit, the whole set of forms to be keyed in afresh. Sample or
100% verification depends on the level of error revealed
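A minimal sketch of verification, assuming both keyings are aligned lists of the same length: the discrepancy rate between the two independent entries is compared with a pre-set limit (1% here, an illustrative figure), and exceeding the limit signals that the set of forms should be keyed in afresh.

```python
def verify(first_entry, second_entry, limit=0.01):
    """Compare two independent keyings of the same forms.

    Returns (discrepancy_rate, rekey_all): rekey_all is True when the
    rate exceeds the pre-set limit. The 1% default limit is illustrative.
    """
    mismatches = sum(a != b for a, b in zip(first_entry, second_entry))
    rate = mismatches / len(first_entry)
    return rate, rate > limit
```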
Validation
Process of reviewing data for range and illogical errors
Number of responses for each variable to be consistent – otherwise countercheck with
original forms by use of ID numbers
Where inconsistent, “clean” data (correct, impute, or delete)
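Validation can be sketched as a set of range and logic rules applied to each record. The rules below (an age range of 0–120 and a "married under-five" consistency check) are illustrative assumptions; flagged records would then be counterchecked against the original forms by ID number.

```python
def validate(record):
    """Flag range errors and illogical combinations in one record.

    Rules are illustrative: age must lie in 0-120, and a record cannot
    be both under five years old and married.
    Returns a list of problems; an empty list means the record is clean.
    """
    problems = []
    if not 0 <= record["age"] <= 120:
        problems.append("age out of range")
    if record["age"] < 5 and record.get("marital_status") == "married":
        problems.append("illogical: married under-five")
    return problems
```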
DATA MANAGEMENT
Involves manipulating data in terms of:
Sorting
Indexing
Creating subsets
Grouping
Creating new variables
Merging files
The main goal of data management is to develop a comprehensive database that enables users
to readily access information
Data management Components
Storage
Create & save a file for the data
Develop and maintain a clear catalogue of filing
Prior to storage, scan for viruses
Maintain backup copies of all data files
Create working files & keep the original data set intact
Data security – ensuring data are kept safe from:
corruption
theft
misplacement
destruction
unauthorized access
Updating/maintenance
Occasioned by changes in the software or tools used. Maintain both copies (before and after
changes) with appropriate version numbers
CHAPTER SEVEN: DATA SOURCES & QUALITY ASSURANCE
Definitions
Data - raw facts that are collected and form the basis for what we know
Information - the product of transforming the data by adding order, context, and
purpose
Knowledge - the product of adding meaning to information by making connections
and comparisons and by exploring causes and consequences
Sources of data
M&E systems incorporate data from more than one level of data collection (clients,
providers, facilities, population). Two categories of sources:
Routine: Data collected on a continuous basis Examples:
◦ Facility-based data (service statistics)
◦ Community based data (service statistics)
◦ Vital registration
◦ Sentinel reporting (surveillance)
protected from deliberate bias or manipulation (political or personal reasons).
It requires cross-functional cooperation
No specific unit or department feels it is responsible for the problem
It requires the agency to acknowledge that poor data quality is a problem
It requires discipline across the board
It requires an investment of financial and human resources
It is perceived to be extremely manpower-intensive
The return on investment is often difficult to quantify and hence justify
4. Indicator definitions: Are there operational indicator definitions meeting relevant standards, and are they systematically followed by all service points?
10. Are there clearly defined and followed procedures to periodically verify source data?
REPORTING LEVEL: FINDINGS AND RECOMMENDATIONS
Finding: No specific documentation specifying data-management roles and responsibilities, reporting timelines, standard forms, storage policy, …
Recommendation: Develop a data management manual to be distributed to all reporting levels.
Finding: The service points do not systematically remove patients “lost to follow up” from counts of numbers of people on ART.
Recommendation: Develop a mechanism to ensure that patients “lost to follow up” are systematically removed from the counts of numbers of people on ART.
CHAPTER EIGHT: DATA ANALYSIS AND INTERPRETATION
Data Analysis-Definition
Further mathematical calculation to produce statistics about tabulated data: the manipulation, summarisation and interpretation of data. It involves converting data into intelligible information such as averages, frequency tables, sums or other statistics.
Role of Data Analysis in M&E
Baseline surveys
Reveal participants’ characteristics in terms of age, sex, residence, educational level,
marital status, etc
Indicate frequency of specific behaviors, risks and protective factors
Monitoring and process evaluation
Reveal quality of program
Coverage and exposure
Program functions
Outcome and impact evaluation
Reveal if and how program achieved its intended results
Reveal what portion of changes in outcome your program can take credit for
Data analysis enables the following comparisons:
Actual results vs. programme targets
Actual progress vs. the projected time frame
Results across programme sites
Programme outcomes vs. control or comparison group outcomes
Median
Measurement with exactly half of the measurements below it and half above it.
Median position = (n + 1)/2-th ordered observation.
E.g. given the scores 2, 3, 4, 5, 7, 7, 9, 10, 12:
Median = (9 + 1)/2 = 5th observation, which is 7.
The median is good for skewed distributions since it is resistant to change; it is not affected by outliers.
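The (n + 1)/2 rule above can be sketched in code. The even-n branch, which averages the two middle values, is a standard extension not spelled out in the text.

```python
def find_median(values):
    """Median via the (n + 1)/2 rule for an odd number of values;
    averages the two middle values when n is even (standard extension).
    Resistant to outliers, which makes it suited to skewed data.
    """
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:                 # odd n: the (n + 1)/2-th ordered observation
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```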
Definition of variables
A variable is any characteristic or attribute of an element of a population that can be
measured in some form such as age, height, weight, religion, marital status, etc. An element
is an individual unit being examined capable of being measured in some form e.g. students,
women, youth, adolescents, men, children etc.
A dependent variable is the phenomenon (happening) of interest that we are trying to explain, e.g. infant mortality (dead or alive), age at first marriage, use of contraception (modern vs. other methods), etc.
An independent variable is one that explains the variation (change) in the dependent variable, e.g. sex, residence, marital status, etc.
Levels of Measurement
Variables can be measured in three levels:
-nominal scale
-ordinal scale
-interval scale
Nominal variables don’t have a natural ordering, e.g. sex (male, female); marital status (single, married, divorced/separated).
Ordinal variables have ordered scales or a natural ordering, e.g. social class (low, middle, high); education level (none, primary, secondary, college).
Interval variables have numerical distances between any two levels of the scale e.g.
age, income, weight, etc
Data Analysis – Plan
Governing principles:
Prepare tabulations even if coverage is incomplete
Present simple definitions if they differ from the standard
Prepare tabulations regularly and in a timely manner
Data Analysis – Results
Each table to be accompanied by an explanatory text
Include annotation in case of limitations on data quality
Where appropriate use figures, maps and graphs
Data Analysis in M&E
Much of analysis done in typical M&E is straightforward
Two types of statistics are used: descriptive and inferential statistics
Descriptive Statistics
Describe general characteristics of a set of data
Examples: frequencies, counts, averages and percentages
A frequency states the number of observations or occurrences of a given value
Sex Frequency
Male 10,628,368
Female 10,815,268
Total 21,443,636
Frequency Tables
The table shows that out of the total population of Kenya, males were 10,628,368 while females were 10,815,268.
Age group  Frequency
15-19      1705
20-24 1290
25-29 1040
30-34 836
35-39 691
40-44 559
45-49 461
Total 6581
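A frequency table like the one above can be produced with a short routine. The percentage column is an addition commonly shown alongside raw counts, not part of the table above.

```python
from collections import Counter

def frequency_table(observations):
    """Return (count, percentage) for each category, as in the
    frequency tables above; percentages are rounded to one decimal."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {cat: (n, round(100 * n / total, 1)) for cat, n in counts.items()}
```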
– Coverage
– Utilization
Other measures
Quality
Efficiency
Coverage
Extent to which a programme reaches its intended target population, institution or
geographic area
Achieved when people in need of services enter the programme and receive the full
set of services which they need
Increasing coverage involves:
o bringing new clients into the programme; and
o retaining clients as long as needed to provide the intervention
Coverage would be computed as the program-level indicator (numerator) divided by
the population in need (denominator)
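The coverage computation described above is a simple ratio of the programme-level indicator to the population in need; expressing it as a percentage is an illustrative choice.

```python
def coverage(served, population_in_need):
    """Coverage = programme-level indicator (numerator) divided by the
    population in need (denominator), expressed as a percentage."""
    return 100 * served / population_in_need
```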
Utilization
Level of service/commodity uptake by clients
Level of utilization of programme services is a proxy measure of the investments by
the programme
Guided by number of clients served and respective targets already set by the
programme
Trends
Monitoring programme performance using multiple points in time
o How has the coverage been changing over time?
o How has the utilization been changing over time?
Useful concerning critical decisions of scale-up, replication or winding-up.
Quality
Focal point for facilities & programs
Influences clients’ willingness to:
– seek services
– continue receiving services
Efficiency
A measure of how economically or optimally inputs are used to produce outputs
Program can be effective yet inefficient e.g. program may reach the targeted number
of expectant mothers but use more workers
Need to balance between efficiency and effectiveness
CHAPTER NINE: MONITORING AND EVALUATION INFORMATION USE
Once reports have been received, they will need to be combed to give meaning to the organization’s effort. In designing an M&E system, we must define how:
1) Reported findings will be utilized.
2) Additional benefits from findings, such as feedback, knowledge and learning, will be derived from the reports.
Information obtained from the reports shall be shared within and outside the organization.
1) Identify the potential readers of the report. They may be some of the stakeholders you
identified in your evaluation plan.
2) Choose clear understandable language geared toward the report's primary audience.
3) Based on this audience, prioritize the evaluation questions that you want to answer in
this report.
4) Structure the materials in a way that leads readers from the key findings to the details
of the project.
5) Gather enough information to explain details such as budgeting and planning. If you know that your readers may want to duplicate your evaluation, you need to add such details to the report. These details should be mainly descriptive.
6) If preparing a formative or process evaluation report, provide enough details about
the operational aspects of the project (i.e., what was done and how).
7) If preparing a summative or outcome evaluation report, provide enough details about
how you interpreted the results and drew conclusions.
8) Use graphs, charts, tables, diagrams, and other visual techniques to display the key
results as simply as possible.
9) Always prepare an executive summary (see below) if the report is longer than 15
pages.
4) Summary of Results:
Present results of the qualitative and quantitative data analysis
Interpretation of Results:
Explain the interpretation of results including impacts on participants and staff,
effectiveness of services, sustainability of project activities, strengths and weaknesses
of the project, and lessons learned (see Module Three).
5) Connection to the Project Objectives:
Highlight the value and achievements of the project and the needs or gaps that the
project addressed.
6) Conclusions:
Describe, overall, (a) how your project objectives were met, (b) how the purpose of
the evaluation was accomplished, and (c) how the project evaluation was completed.
7) Recommendations:
Summarize the key points, make suggestions for the project's future and create an
action plan for moving forward. You may present recommendations in the following
parts:
Refer to the usefulness of the results for your organization, or for others, in areas such
as decision-making, planning, and project management.
Refer to the project limitations, the assistance required, and resources that can
make future project evaluations more credible and efficient.
Describe the changes you would make to your project if you were to carry it
out again and the suggestions you have for other organizations that may want
to conduct a similar project.
Management team: interim report, based on monitoring analysis; written report, discussed at a management team meeting.
DATA DEMAND
It is a measure of the value that the stakeholders and decision makers place on the
information, independent of their use of that information.
For the purpose of defining demand, it is required that:
Stakeholders actively and openly request information or
Demonstrate they are using information in one of the various stages of the DDIU
framework
… thus,
Data demand requires both of the following criteria:
The stakeholders and decision makers specify what kind of information they want to
inform a decision; and
The stakeholders and decision makers proactively seek out that information
DISSEMINATION
The process of sharing information or systematically distributing information or
knowledge to potential users and/or beneficiaries
Should produce effective use of information
Thus,
The goal of dissemination is use
STAKEHOLDERS
Beneficiaries
Implementers
Partners/Donors
Data collectors
Programme /Information managers
Analysts
Supervisors/Colleagues
Policy-makers
Private sector
WHY USE INFORMATION?
Strengthen programs
Engage stakeholders
Ensure accountability and reporting
Advocate for additional resources
Inform policies
Contribute to global lessons learned
ESSENTIALS OF M&E INFORMATION
M&E information must
be manageable and timely
be presented according to the audience’s
Interest
capacity to understand and analyze
time, competing demands on time
have transparent quality
focus on activities, results of interest
focus on meaning and direction for action
CHAPTER 10: ECONOMIC EVALUATION
Economic evaluation definition: the comparative analysis of alternative courses of action in terms of both their costs and consequences. The basic tasks of any economic evaluation are to:
♦ identify,
♦ measure,
♦ value, and
♦ compare the costs and consequences of the alternatives being considered.
achieve is to successfully remove a child’s tonsils then we might choose between,
say, a day case procedure or an inpatient stay. This is an issue of technical efficiency
since the output or ‘outcome’ is fixed, but the inputs will differ depending on which
policy we adopt. The day case approach may perhaps require more intensive staff
input and more follow-up outpatient visits. If this was the case, then inpatient
tonsillectomy may be the more technically efficient strategy.
Types of Economic Evaluation
Cost-Effectiveness Analysis
-When different health care interventions are not expected to produce the same outcomes
both the costs and consequences of the options need to be assessed.
-This can be done by cost-effectiveness analysis, whereby the costs are compared with outcomes measured in natural units, for example per life saved, per life year gained, or per pain- or symptom-free day.
-CEA is concerned with technical efficiency issues, such as:
What is the best way of achieving a given goal? or
What is the best way of spending a given budget?
-Comparisons can be made between different health programmes in terms of their cost
effectiveness ratios: cost per unit of effect.
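Comparing programmes by their cost-effectiveness ratios can be sketched as below; the programme names, costs and effects in the usage test are hypothetical.

```python
def cer(cost, effect):
    """Cost-effectiveness ratio: cost per unit of effect
    (e.g. cost per life saved, per life-year gained)."""
    return cost / effect

def rank_by_cer(programmes):
    """Rank programmes by CER, lowest (most cost-effective) first.
    Each programme is a dict with 'cost' and 'effect' keys."""
    return sorted(programmes, key=lambda p: cer(p["cost"], p["effect"]))
```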
-Under CEA effects are measured in terms of the most appropriate uni-dimensional natural
unit. So, if the question to be addressed was: what is the best way of treating renal failure?
Then the most appropriate ratio with which to compare programmes might be ‘cost per life
saved’.
-Similarly, if we wanted to compare the cost-effectiveness of programmes of screening for
Down’s syndrome the most appropriate ratio might be ‘cost per Down’s syndrome fetus
detected’.
-In deciding whether long-term care for the elderly should be provided in nursing homes or the community, the ‘cost per disability day avoided’ might be the most appropriate measure.
-If the outcomes of alternative procedures or programmes under review are the same, or very
similar, then attention can focus upon the costs in order to identify the least cost option-the
method of evaluation will be cost-minimization analysis.
-If, however, the outcomes are not expected to be the same, then both the costs and
consequences of alternative options need to be considered. Cost-effectiveness analysis is one
method of economic evaluation that allows this to be done.
1. Measures of Effectiveness
-In order to carry out a cost effectiveness analysis it is necessary to have suitable measures of
effectiveness.
-These will depend on the objectives of the particular interventions under review.
-In all cost effectiveness analysis, however, measures of effectiveness should be defined in
appropriate natural units and, ideally, expressed in a single dimension.
-Common measures used in several studies have been “lives saved” and “life years gained”. Several other measures of effectiveness have been used by different researchers; these have included those shown in the table below.
2. Cost-Minimization Analysis
-Cost-minimization analysis is an appropriate evaluation method to use when the case for an
intervention has been established and the programmes and procedures under consideration
are expected to have the same or similar outcomes.
-In these circumstances, attention may focus on the cost side of the equation to identify the
least costly option.
-Cost-minimization:
Is concerned only with technical efficiency
Can be regarded as a narrow form of cost effectiveness analysis
Evidence is given on the equivalence of the outcomes of different interventions
As outcomes are considered to be equivalent, decisions can be made on the basis of costs alone
Advantages
Simple to carry out: requires only that costs be measured and that outcomes can be shown to be equivalent
Avoids needlessly quantifying data
Disadvantages
Can only be used in narrow range of situations.
Requires that outcomes be equivalent
-CMA is really a special form of cost-effectiveness analysis, where the consequences of the
alternative treatments being compared turn out to be equivalent.
-In general Cost-effectiveness analysis (CEA) is:
Concerned with technical efficiency.
What is the best way of achieving a given goal with least resources?
What is the best way of spending a given budget?
Used when the interventions being compared can be analyzed with common
measures.
Advantages of the CEA approach
It is relatively straightforward to carry out
It is often sufficient for addressing many questions in health care
Disadvantages of CEA approach
Since outcome is uni-dimensional, cannot incorporate other aspects of outcome into
the cost-effectiveness ratio.
Interventions with different aims/goals cannot be compared with one another in a
meaningful way.
Meaning of the outcome measure is not always clear, e.g. what is the value of a case detected in a screening programme?
May have situations when the option with the highest cost effectiveness ratio should
be chosen.
3. Discounting Benefits (in cost-effectiveness analysis)
-Costs incurred at different points in time need to be “weighted” or discounted to reflect the
fact that those that occur in the immediate future are of more importance than those that
accrue in the distant future. This raises the question: should the benefits or effects of
alternative procedures also be discounted? (For details about discounting refer to section
three of this material)
-Economists differ on this issue. If zero discounting (no discounting applied) were adopted, the main consequence would be to change the relative cost effectiveness of different procedures.
-Using a positive discount rate means that projects with long lasting effects receive lower
priority. If a positive rate is replaced by a zero rate, procedures such as neonatal care, which lead to benefits over the recipient’s entire future lifetime, will become relatively more cost effective. In practical terms, it is probably true to say that while the case for using a zero discount rate for benefits has powerful intellectual appeal and may gain empirical support in the future, it would be too hasty to recommend that positive rates be discarded in economic evaluations.
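Discounting a stream of future costs or benefits to present value can be sketched as below; the annual discount rate is a parameter, and the timing convention (amounts accruing at the end of each year, year 0 being now) is an illustrative assumption.

```python
def present_value(amounts, rate):
    """Discount a stream of future costs or benefits to present value.

    amounts[t] accrues at the end of year t (t = 0 is now);
    rate is the annual discount rate, e.g. 0.03 for 3%.
    Each amount is divided by (1 + rate) ** t, so later amounts
    carry less weight, as described in the text.
    """
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts))
```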
In general:
Cost-effectiveness analysis is a form of economic evaluation in which the costs of
alternative procedures or programmes are compared with outcomes measured in natural
units-for example, cost per life year saved, cost per case cured, cost per symptom free
day.
Effectiveness data are ideally collected from economic evaluations built in alongside
clinical trials. In the absence of dedicated trials researchers need to draw on the existing
published work.
Sensitivity analysis should be applied when there is uncertainty about the costs and
effectiveness of different procedures. This investigates the extent to which results are
sensitive to alternative assumptions about key variables.
There is debate among economists about whether benefit measures should be “time discounted” in the same way as costs. If they are not, projects with long lasting effects will become relatively more cost effective, for example maternity services and health promotion. But it would probably be wrong to recommend this as a standard practice.
Cost-Utility Analysis (CUA)
-CUA is concerned with technical efficiency and allocative efficiency (within the health care
sector).
-CUA tends to be used when quality of life is an important factor involved in the health
programmes being evaluated.
-This is because CUA combines life years (quantity of life) gained as a result of a health
programme with some judgment on the quality of those life years.
-It is this judgment element that is labeled utility.
-Utility is simply a measure of preference, where values can be assigned to different states of
health (relevant to the programme) that represent individual preferences.
-This is normally done by assigning values between 1.0 and 0.0, where 1.0 is the best
imaginable state of health (completely healthy) and 0.0 is the worst imaginable (perhaps
death).
-States of health may be described using many different instruments which provide a profile
of scores in different health domains.
This approach of using utility is not restricted to similar clinical areas, but can be used to
compare very different health programmes in the same terms.
-As a result, ‘cost per QALY gained’ league tables are often produced to compare the relative
efficiency with which different interventions can turn resources invested into QALYs gained.
-It is possible to compare surgical, medical, pharmaceutical and health promotion
interventions with each other.
-Comparability then is the key advantage of this type of economic evaluation.
-For a decision-maker faced with allocating scarce resources between competing claims, CUA can potentially be very informative.
-However, the key problem with CUA is the difficulty of valuing health benefits. Can a state of health in fact be collapsed into a single value? If it can, then whose values should be considered in these analyses? For these reasons, CUA remains a relatively little used form of economic evaluation.
When should CUA be used?
CUA should be used:
When health-related quality of life is the important outcome. For example, in
comparing alternative programmes for the treatment of arthritis, no programme is
expected to have any impact on mortality, and the interest is focused on how well
the different programmes will be improving the patient’s physical function, social
function, and psychological well being;
When the programme affects both morbidity and mortality and we wish to have a
common unit of outcome that combines both effects. For example, treatments for
many cancers improve longevity and improve long-term quality of life, but
decrease quality of life during the treatment process itself.
When the programmes being compared have a wide range of different kinds
of outcomes and we wish to have a common unit of output for comparison. For
example, if you are a health planner who must compare several disparate
programmes applying for funding, such as expansion of neonatal intensive care, a
programme to locate and treat hypertension, and a programme to expand the
rehabilitative services provided to post-myocardial infarction patients;
When we wish to compare a programme to others that have already been
evaluated using cost-utility analysis.
When should CUA not be used?
When only intermediate outcome data can be obtained. For example, in a study to
screen employees for hypertension and treat them for one year, intermediate
outcomes of this type cannot be readily converted into QALYs for use in CUA.
When the effectiveness data show that the alternatives are equally effective in all
respects of importance to consumers (e.g. including side-effects). In this case, cost-
minimization analysis is sufficient; CUA is not needed;
When the effectiveness data show that the new programme is dominant; that is, the
new programme is both more effective and less costly (win-win). In this case, no
further analysis is needed;
When the extra cost of obtaining and using utility values is judged to be in itself not cost
effective. This is the case above in points 2 and 3. It would also be the case even when the new
programme is more costly than the old, if effectiveness data show such an enormous superiority
for the new programme that the incorporation of utility values could almost certainly not change
the result. It might even be the case with a programme that is more costly and only somewhat
more effective, if it can be credibly argued that the incorporation of any reasonable utility values
will show the programme to be overwhelmingly cost-effective.
Measuring Quality
Measuring a person’s quality of life is difficult. Nonetheless, it is important to have some means of doing so, since many health care programmes are concerned primarily with improving the quality of a patient’s life rather than extending its length. Various quality of
life scales have been developed in recent years.
The Nottingham health profile is one quality of life scale that has been used quite widely in
Britain. It comprises two parts.
The first measures health status by asking for yes or no responses from patients
to a set of 36 statements related to six dimensions of social functioning:
a) Energy,
b) Pain,
c) Emotional reactions,
d) Sleep,
e) Social isolation,
f) Physical mobility.
These responses are then “weighted” and a score of between 0 and 100 is assigned to
each dimension.
The second part asks about seven areas of performance that can be expected to be
affected by health:
Employment,
Looking after the home,
Social life,
Home life,
Sex life,
Hobbies,
Holidays.
The Nottingham health profile has been applied, for example, in studies of heart
transplantation, rheumatoid arthritis and migraine, and renal lithotripsy.
Other quite widely used measures include the sickness impact profile and the quality of
wellbeing scale
Rosser Index
-Rosser and her colleagues described health status in terms of two dimensions: disability and
distress.
-The states of illness are classified into eight categories of disability and four categories of
distress.
-By combining these categories of disability and distress, 32 (8 × 4) different states of health were obtained.
-Rosser then interviewed 70 respondents (a mixture of doctors, nurses, patients and healthy
volunteers) and, using psychometric techniques, sought to establish their views about the
severity of each state relative to other states.
-The final results of this exercise were expressed in terms of a numeric scale extending from
0 = dead to 1 = perfect health.
-With this classification system it becomes possible to assign a quality of life score to any
state of health as long as it is placed in an appropriate disability or distress category.
Quality – Adjusted Life – Years (QALY)
-One of the features of conventional CUA is its use of the QALY concept; results are reported in terms of cost per QALY gained.
-QALYs combine life years gained with a measure of the quality of those years.
-Quality is measured on a scale of 0 to 1, with 0 equated to being dead and 1 equated to the best imaginable state of health.
-QALYs combine all dimensions of health and survival into a single index.
-CU ratio = (Cost A − Cost B) / (QALY A − QALY B)
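The QALY calculation and the CU ratio above can be sketched as follows; the cost and QALY figures in the usage test are hypothetical.

```python
def qalys(life_years, utility):
    """QALYs combine length of life with quality of life,
    where utility lies on the 0 (dead) to 1 (full health) scale."""
    return life_years * utility

def incremental_cu_ratio(cost_a, cost_b, qaly_a, qaly_b):
    """Incremental cost-utility ratio: the extra cost per QALY gained
    when programme A replaces programme B, i.e.
    (Cost A - Cost B) / (QALY A - QALY B)."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)
```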
Cost-Benefit Analysis
-Cost benefit analysis is the most comprehensive and theoretically sound form of economic
evaluation and it has been used as an aid to decision making in many different areas of
economic and social policy in the public sector for more than fifty years.
-CBA estimates and totals up the equivalent money value of the benefits and costs to the
community of projects to establish whether they are worthwhile. These projects may be dams
and highways or can be training programmes and health care systems.
-The main difference between cost-benefit analysis and other methods of economic
evaluation that were discussed earlier in this series is that it seeks to place monetary values
on both the inputs (costs) and outcomes (benefits) of health care.
-Among other things, this enables the monetary returns on investments in health to be
compared with the returns obtainable from investments in other areas of the economy.
-Within the health sector itself, the attachment of monetary values to outcomes makes it
possible to say whether a particular procedure or programme offers an overall net gain to
society, in the sense that its total benefits exceed its total costs.
-CBA requires programme consequences to be valued in monetary units, thus enabling the
analyst to make a direct comparison of the programme's incremental cost with its incremental
consequences in commensurate units of measurement, be they dollars or pounds.
-CBA compares the discounted future streams of incremental programme benefits with
incremental programmes costs; the difference between these two streams being the net social
benefit of the programme.
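The net social benefit just described can be sketched in a few lines: discount each future stream back to present value and take the difference. The 5% discount rate and the three-year benefit and cost streams below are invented purely for illustration.

```python
def discounted_sum(stream, rate):
    """Present value of a yearly stream; year 0 is undiscounted."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

def net_social_benefit(benefits, costs, rate=0.05):
    """Discounted incremental benefits minus discounted incremental costs."""
    return discounted_sum(benefits, rate) - discounted_sum(costs, rate)

# Hypothetical 3-year programme: costs are front-loaded, benefits accrue later.
benefits = [0, 40_000, 60_000]
costs = [50_000, 10_000, 10_000]
nsb = net_social_benefit(benefits, costs, rate=0.05)
print(nsb)  # positive => the programme is worthwhile on CBA grounds
```

Note that discounting penalises benefits that arrive late, so a programme with identical undiscounted totals but earlier benefits would show a larger net social benefit.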
-The goal of CBA is to identify whether a programme's benefits exceed its costs; a
positive net social benefit indicates that a programme is worthwhile.
-CBA is a full economic evaluation because programme outputs must be measured and
valued. In many respects CBA is broader in scope than CEA/CUA.
-CBA is broader in scope and able to inform questions of allocative efficiency, because it
assigns relative values to health and non-health related goals to determine which goals are
worth achieving, given the alternative uses of resources, and thereby determining which
programmes are worthwhile.
Both costs and benefits are assigned a monetary value. The benefits of any
intervention can then be compared directly with any costs incurred.
If the value of benefits exceeds the costs of any intervention, then it is potentially
worthwhile to carry that intervention out.
If society funds projects from a given budget, then it can maximize the benefits
generated by social spending. CBA is concerned with allocative efficiency, that is,
with the question: is a particular goal worthwhile? Potentially it can answer
questions such as whether extra money should be used for heart transplants or for
improving housing.
The method requires that all resources and benefits generated by an intervention
be assigned a monetary value. It therefore needs to place a cost on things which
have no market value, e.g., changes in health, quality of life, length of life, pain, etc.