Evaluation and Impact Assessment
A U.S. Department of State/NISCUP Funded Partnership among the University of Washington-Evans
School of Public Affairs, The Siberian Academy of Public Administration, and the Irkutsk State University
Evaluation and impact assessment are essential to improving the operational
efficiency of any microfinance organization. This module offers practical guidance for
evaluating organizational performance using both qualitative and quantitative data. Evaluation is
a critical component in the development and growth of any organization, but it is especially
important in the microfinance field. A key component of evaluation is impact assessment –
assessing the impact of microfinance on the client. When conducting evaluations and impact
assessments, microfinance organizations (MFOs) want to know whether their activities are furthering
their mission and whether they are operating in the most efficient and effective way – for clients, staff,
and donors. By the end of this module, students should have a basic understanding of how to
conduct an overall program evaluation. They will also be equipped to assess the impact of
microfinance on borrowers.
This module is divided into two sections. The first defines program evaluation and impact
assessment, along with important considerations for selecting an evaluation design. The second
section offers detailed information on how to conduct an impact assessment.
2. What is an Evaluation?
Evaluation is a participatory process designed to determine how well a program or project has
accomplished its goals. Evaluation is always based on the examination of some established,
empirical variable or indicator, and how current practices compare to that standard. The results
of evaluation provide managers with information about whether to expand a program, to
continue a program at its current level, to reduce spending, or to cut it entirely.1 The term
“evaluation” describes different models that suit different purposes at different stages in the life
of a project. Outside consultants are often hired to conduct a formal program evaluation of a
microfinance organization. As a result, evaluation has frequently been viewed as an external
imposition – a process that is not very helpful to project staff. Program staff can also conduct an
internal program evaluation, however. When conducted appropriately, evaluation should be a
tool that not only measures success but can contribute to it as well.2
Microfinance is believed to be an important tool in the fight against poverty. In recent years
there has been huge growth in the number and size of microfinance organizations and their
client bases around the world. Before the early 1980s, only a handful of MFOs existed, while today
these institutions number more than 7,000 worldwide.3 As the number of MFOs grows,
evaluation becomes an essential component in the development, maintenance, and performance of an
organization. It helps to ensure that the MFO is meeting its service mission and to demonstrate
measurable outcomes to donors. Self-financed MFOs, which are committed to covering the cost
of capital without any subsidization, rely on the market to evaluate their services. For these
MFOs, financial indicators document the stability and risk levels of the loan portfolio and serve
to evaluate the financial health of the organization. Evaluation, therefore, is a reflective process
requiring a critical look at business processes and organizational activity. Regular evaluation
measures progress toward a specific goal and is a vital component of any effort to manage for
results.4
4. Types of Evaluation
Implementation evaluation asks questions such as: Were the appropriate staff members hired and
trained, and are they working in accordance with the proposed plan? Were the appropriate
materials and equipment obtained? Project staff should use implementation evaluation as an
internal check to see that all the essential elements of the project are in place and operating.
5. EHR/NSF Evaluation Handbook, p. 7, available at: <http://www.ehr.nsf.gov/rec/programs/evaluation/handbook/>.
6. Ibid.
The other aspect of formative evaluation is progress evaluation. This type of evaluation is used
to assess progress in meeting the project’s goals. Progress evaluation should be thought of as an
interim outcome measurement. Typically, a progress evaluation will measure a series of
indicators that are designed to show progress towards program goals. These indicators could
include participant ratings of training seminars or services provided through an MFO, opinions
and attitudes of participants and staff, as well as key financial indicators from the loan portfolio.
By analyzing interim outcomes, project staff avoid the risk of waiting until participants have
experienced the entire treatment before assessing outcomes.7
Financial performance indicators are a critical component of an MFO’s formative evaluation. See
Attachment 1 for a description of some of the main financial indicators that help MFOs
determine their financial health.
The results of an evaluation can be used broadly in an organization. The results are not only a
good source of ideas for organizational improvement, but also a source of information for the
organization’s stakeholders, such as the Board of Directors, donors, host government,
collaborators, clients or shareholders.
A large proportion of MFOs state poverty reduction as their mission, and have received donor
funding for this mission. Even MFOs primarily interested in financial self-sustainability may
have poverty reduction as part of their mission statement. At the most basic level, therefore,
there is a need to understand how or if MFOs are affecting borrowers. Impact assessments can
be used as “management tools for aiding practitioners to better attain program goals.”8
In addition to the benefits that an impact assessment offers an MFO, donors have an obligation to
ensure that they are making the right choices in terms of their objectives. MFOs are also
accountable to their funders – usually governments, shareholders, and taxpayers – and therefore
have a strong interest in obtaining measures of the effectiveness of their funds. Donors may
7. EHR/NSF Handbook, p. 8.
8. Cohen, Monique and Gary Gaile, "Highlights and Recommendations of the Second Virtual Meeting of the CGAP Working Group on Impact Assessment Methodologies," April 14-28, 1998.
use impact assessment results to make resource allocation decisions for individual organizations,
or for broad strategic funding decisions. For self-sustainable MFOs, financial indicators provide
data on the loan portfolio and help facilitate outside investment and financial reporting. For
these MFOs, it is the market that ultimately decides whether the MFO stays in business or goes
out of business.
Specific impacts of interest include health, nutrition, reproduction, child schooling,
income, and employment. In addition, practitioners may want to know whether microfinance has had any
impact on poverty, women's empowerment, or domestic violence.10 Social science evaluators
are challenged to construct impact assessments capable of measuring whether
microfinance contributed to any gains in individual or family welfare.
5. Types of Impact Assessments
When an impact assessment is planned, the type of assessment that should be used depends on
the needs of the various stakeholders. Determining these needs will define the type of tools and
impact assessment that should be performed. Below are the two most common types of impact
assessments.
As mentioned earlier, donors require some evidence that their money is being used to effectively
further their stated goals. A donor-led impact assessment examines the impact of an MFO from
the perspective of the lender. Results of a donor-led impact assessment are often shared with the
donor’s funders, which are usually government agencies or foundations. Future funding
decisions are often made based on this assessment.11
While donor-led assessments have been the most commonly conducted assessments, there has
been a shift towards practitioner-led assessments, which have a different focus. These
assessments focus more on “how the impact assessment process can fit into existing work
patterns, build on existing knowledge and experience, and produce results that can be easily used
by management.”12
According to David Hulme at the Institute for Development Policy and Management, donor-led
impact assessment methods can be thought of as needing to “prove impact,” while practitioner-
led impact assessment is meant to “improve practice” of an organization.13 Using the schematic
created by Hulme in Figure 5.1 below, you can begin to visualize this idea.
Figure 5.1
PROVING IMPACTS <-------------------------------------------> IMPROVING PRACTICE

11. Improving the Impact of Microfinance on Poverty: Action Research Programme, available at: <www.ids.ac.uk/impacts/stateart/stateart>.
12. Ibid.
13. Economists and statisticians will say that you cannot "prove" impact. Rather, one can only refute alternative hypotheses.
Effective evaluation depends on the ability of evaluators to establish linkages between the
changes identified during and after the course of the project and the intervention to which
they are specifically attributable. It is not possible to conduct an evaluation without
measures or indicators. Therefore, three standards for impact assessments have been
established: credibility, usefulness, and cost-effectiveness. This framework is
designed to be flexible enough to take into account different types of programs and
different contexts. It also recognizes that there are necessary tradeoffs among these
standards when conducting an evaluation.
6.1 Standards for Microfinance Impact Assessments
6.1.1 Credibility14
Credibility implies the trustworthiness of the entire evaluation process. It begins with clearly
stated objectives that indicate the type of impacts that will be examined, the intended use of the
findings, and the intended audience. The impact assessment should formulate a set of key
hypotheses and seek to test them using quantitative and qualitative studies. The evaluation
should establish and test a plausible relationship between the microfinance intervention and
changes as a result of participating in the program. Credibility can be improved by using data-
gathering instruments that are well designed and clearly documented.
6.1.2 Usefulness15
In order to be useful, an impact assessment must be designed to address the key questions and
concerns of the intended audience. The usefulness of the assessment is enhanced when those
who are expected to use the findings are involved in the planning, design, and analysis stages.
Involvement by the main users of the data helps to ensure that their concerns are reflected in the
impact assessment process. A key element of usefulness is the timeliness of the data. Impact
assessment data also can be used to define strategic objectives, design and deliver appropriate
products, and suggest new products. Finally, it can be useful for developing strategies to
improve portfolio performance by reducing turnover, expanding outreach, and improving
portfolio quality.
6.1.3 Cost-Effectiveness16
Designing a worthwhile impact assessment that provides the donor or practitioner with valuable
information is often challenging, particularly when working with a limited operating budget.
According to Carolyn Barnes and Jennefer Sebstad, an impact assessment can be more cost-
effective if there is a good “fit” between the objectives, methods, and resources available to those
14. Cohen, Monique and Gary Gaile, "Highlights and Recommendations of the Second Virtual Meeting of the CGAP Working Group on Impact Assessment Methodologies," April 14-28, 1998.
15. Barnes, Carolyn and Jennefer Sebstad, "Guidelines for Microfinance Impact Assessments," Discussion Paper for the CGAP 3 Virtual Meeting, October 18-29, 1999 (March 2000). AIMS, Management Systems International.
16. Ibid.
assessing impact. Where possible, drawing on the successes and failures of past impact assessments can
help an MFO achieve greater efficiency. By noting which methods worked well and which did not,
evaluators can avoid repeating past mistakes. These past experiences, or other examples in the
literature, can be especially helpful
in identifying meaningful and valid impact hypotheses and variables, developing data-collection
strategies for obtaining reliable and valid data, and selecting appropriate analytical techniques.
7. Evaluation Design17
Under the best circumstances, a well-designed evaluation enables the evaluator to say with
confidence that program impacts were due to the program interventions and not to outside
factors that happened to coincide with them. The ability of the evaluator to
do this rests on the internal and external validity of the evaluation. Internal validity is the
accuracy of concluding that the outcome of an experiment is due to the intervention. External
validity is the extent to which the result of an intervention can be generalized. Good evaluation
design controls for external factors that constitute threats to validity.
Sensitization – sensitization of participants caused by the pre-test or any other part of the evaluation
It is not possible to control for all outside factors when conducting an evaluation. However, it is
possible to increase internal validity by randomly selecting participants, randomly assigning
them to groups and using a control group. A control group does not receive the intervention (for
our purposes, they would not have participated in the microfinance program) so that the effect of
the intervention can be determined on the test group, which has received the intervention.
External validity can be improved by careful adherence to good experimental research practices.
Evaluators can choose from several types of evaluation designs to minimize threats to internal and
external validity. In choosing the right evaluation design, evaluators will typically try to find a
balance among internal validity, external validity, and cost-effectiveness. Figure 7.1 presents an
overview of the three classes of design.
7.1 Pre-Experimental Design
This type of evaluation design controls for very few external factors. Pre-experimental designs
usually focus on a single program and attempt to discover whether or not it has achieved its
objectives. As a result, it is difficult to draw strong conclusions about the impact resulting
directly from the program, because evaluators cannot be confident if changes were caused by the
intervention, or by a host of external factors. Even though pre-experimental design evaluations
lack the sophistication of more advanced evaluation research, they are inexpensive and easy to
use. Often evaluators are faced with situations where there is no baseline data and no control
group. In these cases, pre-experimental design affords the best possible evaluation under the
circumstances.
This design compares the same subjects before and after the program intervention. There is no
random selection of participants into the test group and no control group in this design. The
before/after comparison is represented visually below, where X is the intervention (microfinance
project) and O is each observation.
O1 X O2
This design is simple to use and inexpensive, but it does not control for threats to validity.
Because there is no control group, it is difficult to determine if any changes to the clients were a
result of the intervention (e.g., a microfinance training seminar) or of some other factor, such as
maturation.
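To make the comparison concrete, the sketch below analyzes a one-group before/after design with a paired t-test; the income figures, and the choice of a paired t-test, are illustrative assumptions rather than part of the module.

# A minimal sketch of the O1 X O2 before/after comparison.
# Incomes are hypothetical monthly figures for the same eight clients.
from scipy import stats

income_before = [120, 95, 140, 110, 100, 130, 90, 115]  # O1
income_after = [135, 100, 155, 120, 105, 150, 95, 125]  # O2

changes = [a - b for a, b in zip(income_after, income_before)]
print("mean change:", sum(changes) / len(changes))

t_stat, p_value = stats.ttest_rel(income_after, income_before)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
# Even a significant change cannot be attributed to the program here,
# because without a control group maturation, history, and other
# threats to validity remain uncontrolled.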
This design is used when the evaluator has no baseline data on the participants. The evaluator
simply measures and observes how program participants score after the program has operated for
a set period of time. This design is represented visually below, where X is the intervention
(microfinance project) and O is the observation.
X O1
The major limitation of this design is that the lack of an initial observation leaves little ground to
infer whether the participants changed as a result of the microfinance intervention. There is no
random selection of participants. Evaluators, however, can strengthen this design by developing
Figure 7.1 Overview of Evaluation Designs (Pre-experimental / Quasi-experimental / Experimental)
- Random selection of subjects from the population? No / No / Yes
- Random assignment of subjects to groups? No / No / Yes
- Random assignment of treatments to groups? No / No / Yes
- Degree of control over external factors? None / Some / Yes
7.2 Quasi-Experimental Design
A quasi-experimental design is one that looks like an experimental design, but lacks the key
ingredient – randomization. These types of evaluation designs are an improvement over pre-
experimental designs because they attempt to control for and minimize threats to validity. In
almost all cases, quasi-experimental designs incorporate control groups and thus, provide some
standard of comparison to determine if the intervention (i.e. microfinance loan or training) had
the desired impact. Two common quasi-experimental designs, the time series design and the
non-equivalent control group design, are described below. One of the intended purposes of
quasi-experimental research is to capture events over an extended period of time, as well as a
large number of different events, in order to control for various threats to validity.
The time series design helps to overcome the weaknesses of the one-shot, before/after pre-
experimental design. In this design, the evaluator looks at data over a longer period of time,
both before and after the intervention. The idea with time series data is to understand whether
the program caused a break in an established pattern of trends, such as income levels or
education and literacy levels.
O1 O2 O3 X O4 O5 O6
O1 O2 O3 O4 O5 O6
The graphic above represents a multiple time series design. A matching control group
is selected, and observations are made of both the control and experiment groups. Selection of
participants in a random manner is critical to the validity of this design. If gains appear in the
experiment group but not in the control group, the evaluator must eliminate all
other plausible explanations before concluding that the intervention (microfinance) caused the
change.
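A minimal sketch of how the two series might be compared, assuming six hypothetical observations of average client income; the break_at index marking the intervention point is an assumption for illustration.

# Multiple time series sketch: experiment series (with intervention X
# after the third observation) versus a matched control series.
experiment = [100, 102, 104, 115, 118, 121]  # O1 O2 O3 X O4 O5 O6
control = [99, 101, 103, 105, 106, 108]      # O1 O2 O3   O4 O5 O6

def pre_post_change(series, break_at=3):
    """Difference between the post-intervention and pre-intervention means."""
    pre, post = series[:break_at], series[break_at:]
    return sum(post) / len(post) - sum(pre) / len(pre)

print("experiment change:", pre_post_change(experiment))
print("control change:", pre_post_change(control))
# A clearly larger break in the experiment series than in the control
# series is consistent with a program effect, but rival explanations
# must still be ruled out.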
This design is typically used when it is not possible to randomly assign subjects to groups for
political, ethical, or other reasons. For example, it may not be possible to deny some clients a
microfinance loan while giving it to others just for the purpose of creating a control group. With a
non-equivalent control group, an attempt is made to find a control group that, although not
randomly selected, has similar socio-economic characteristics. This design is represented
visually below, where X is the intervention (microfinance project) and O is the observation.
O1 X O2
O3 O4
Evaluators using this design collect pre- and post-test data and examine the results. This allows
the evaluator to compare the two groups: if no differences are found after the program, the
evaluator can be skeptical about the effects of the program. However, if there are differences, it
is important for the evaluator to determine if the differences are due to the program or some other
reason.
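One standard way to carry out this comparison is a difference-in-differences calculation: the change in the comparison group is used as an estimate of what would have happened to clients without the program. The technique is not named in the module, and the figures below are hypothetical.

# Difference-in-differences sketch for the non-equivalent control
# group design (O1 X O2 over O3 O4). Figures are hypothetical
# average incomes.
o1, o2 = 100.0, 130.0  # clients: pre-test, post-test
o3, o4 = 100.0, 110.0  # comparison group: pre-test, post-test

client_change = o2 - o1    # program effect plus outside factors
control_change = o4 - o3   # outside factors only, ideally
estimated_effect = client_change - control_change
print("estimated program effect:", estimated_effect)  # 20.0
# This is valid only if both groups would have followed the same
# trend in the absence of the program -- the design's key assumption.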
7.3 Experimental Design
True experimental design utilizes random selection and control groups to ensure internal validity.
It is often touted as the most rigorous of all research designs because its results are more "valid"
than those of other designs – i.e., the data from the experiment measure what they are intended to
measure, in this case the effect of credit for poor families. The key to success for experimental
design is the random assignment of participants to both the control and experiment groups. While
no two groups of people will ever be exactly the same, random selection of participants improves
what can be inferred about the intervention.
Randomization means that each member of the population being studied has the same probability
of being selected. This helps ensure that the experiment and control groups do not differ
systematically in characteristics that could influence the experiment's results. For example,
individuals who choose to join microfinance groups may have some difficult-to-measure
characteristic, such as personal risk tolerance or entrepreneurship. Therefore, to understand the
impact of microfinance on personal incomes, it would be incorrect to sample only participants
within the microfinance groups. The result would overstate the actual impact of micro-credit on
personal income, as the participants may be individuals whose incomes would have risen over
time anyway because of their entrepreneurial skills.
While experimental design may be the most desirable form of evaluation from a scientific
perspective, it is likely to be difficult to carry out in most real-world contexts. In Russia, there are
numerous challenges associated with experimental design; these are described in Box 6.1.
Experimental evaluation is often intrusive and time consuming, and may not be possible in all
social science settings.
Evaluators using this design select a random group of participants and assign them to an
experimental and a control group. Random selection is the process of drawing a sample of people
for the experiment. To obtain a random selection of MFO clients, an evaluator might
pick every 4th borrower from an alphabetical listing, yielding a sample of 100 clients. Fifty of
these clients could then be randomly assigned to the experiment group and fifty to the control
group. In the following graphic, both the control group and the experiment group have been
randomly chosen and assigned.
O1 X O2
O3 O4
Participants receive the same pre-test and post-test checks. The goal of the evaluator is to
determine whether any differences in test results can be attributed to the specific
intervention. Random selection helps to control for factors like selection bias, as well as
maturation and history.
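The sampling procedure just described (every 4th borrower from an alphabetical list, then a random split into two groups of 50) might look like the following sketch; the client list is a hypothetical stand-in.

import random

# Hypothetical alphabetical listing of 400 borrowers.
clients = [f"client_{i:03d}" for i in range(400)]

# Every 4th borrower: strictly speaking this is systematic sampling,
# which approximates a random sample when the list order is unrelated
# to the outcome being studied.
sample = clients[::4]  # 100 clients

random.shuffle(sample)          # randomize before assignment
experiment_group = sample[:50]  # receives the intervention
control_group = sample[50:]     # does not receive the intervention
print(len(experiment_group), len(control_group))  # 50 50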
Since some groups may show gains due to repeated testing, an evaluator might want to conduct
an experiment where no pre-test is given. In this case, participants are randomly assigned to an
experiment group and a control group. In the graphic below, the intervention (the microfinance
program) is administered and the results are tested to identify differences in outcomes between
the two groups.
X O1
O2
The advantage of this design is that it is cheaper than a pre-test/post-test design. The addition of a
randomly selected comparison group helps prevent threats to validity including history,
maturation, and testing effects.
Box 6.1 Challenges of Field Research in Siberia

During the summer of 2002, a group of Russian and American students conducted field research
in Novosibirsk and Irkutsk. The goal of the research was to better understand the demand for
and supply of capital in Siberia. While significant social science research has been conducted in
Moscow and St. Petersburg, the regions remain relatively unexplored, at least to western
audiences.
The Russian and American students set out to survey 500 individuals in each of the Novosibirsk
and Irkutsk Oblasts: 300 surveys were designated for residents of the city, 100 for villagers, and
an additional 100 were to be filled out by small-business owners in either cities or villages.
Using western survey methodology, the researchers ran into numerous difficulties, including:
* Randomized survey: Accurate city lists and telephone books are not available in many Russian
cities. Not all residents have telephones, especially in the villages. This led to the decision to
self-administer the survey.
* The non-response problem: Respondents were less willing to participate when the research
appeared to be an end in itself. Many people in the villages questioned the purpose of the survey and
wanted nothing to do with it. They were suspicious of the American students asking questions
and felt that the survey would not help them personally.
* Translation challenge: It was difficult to translate a survey with culturally specific terms like
“community,” “small-business owner,” and “civil society.” The Russian term for business, for
example, carries a negative connotation. Small business owners are defined differently in the
U.S. and Russia.
* Gender differences: It was difficult to locate men between the ages of 30-50 years who were
willing to answer the survey. Unemployment and alcoholism create problems for men,
especially in the villages. Women of all ages were more likely to answer the survey.
We have now discussed the importance of, and the scientific rationale behind, evaluation and
impact assessments, as well as the factors to consider when approaching an impact assessment.
Next we examine the steps involved in conducting an evaluation.
Cost-effective evaluations and impact assessments employ methods that are appropriate to answer
the key questions to the degree of precision needed. The choice of method for evaluation and
impact assessment also depends on the purpose of the assessment and its audience. The typical
evaluation or impact assessment can be conducted using two basic divisions of methods:
qualitative and quantitative.

18. Barnes and Sebstad, "Guidelines for Microfinance Impact Assessments."
Common quantitative methods include sample surveys and semi-structured interviews. Common
qualitative methods include focus groups, case studies, individual interviews based on key open-
ended questions, participatory appraisals, and participatory learning activities. Qualitative
approaches can inform quantitative approaches and vice versa. The combination of qualitative
and quantitative approaches clearly can enhance an impact assessment. Table 8.1 below lists key
features and strengths of each of these methods and offers guidance in choosing the right
method.19 It is only one list of the evaluation methods that could conceivably be employed;
most likely, the chosen approach will mix several of the methods discussed below.
Qualitative methods address questions related to how and why and are best at uncovering reasons
behind unanticipated results. Results are generalizable to theoretic propositions, not to
populations. Quantitative methods address questions related to what, who, where, how many,
how much, and the incidence or prevalence of a phenomenon. The results may be generalizable to
a larger population, depending on the sampling technique used.
Table 8.1 Evaluation and Impact Assessment Methods and When to Use Them20

Individual interviews based on key open-ended questions
Use when: Need to develop a survey instrument; users want timely information on specific issues or questions, such as reasons for arrears; time is limited for development and testing of a survey based largely on close-ended questions; users require both quantitative data and information that helps to explain the data.

Focus groups
Use when: Priority is to understand motivations and perceptions of clients; want to stimulate reflection and discussion on client satisfaction or group dynamics; need information to interpret quantitative data; need information quickly to address an issue; participatory principles are a priority for the MFO; need to understand the causal impact.
Do not use when: Need information based on representativeness and to generalize information from participants to a larger population; want to understand the socioeconomic level of project participants; local culture prevents individuals from feeling relatively free to express their opinions in a group situation.

Case studies
Use when: Need to understand the causal impact process; want to search out rival explanations for change and unexpected or negative impacts; need to illuminate and put a human face on quantitative data; want to understand changes over a long period of time; need to identify reasons for lack of change in survey impact variables.
Do not use when: Need information about a large number of clients and want to generalize the information gathered to a larger population; indicators of program impact are clear and easily measurable, and negative impacts are unlikely; information is needed quickly.

Participatory self-learning
Use when: Program promotes participatory principles; program's objectives include empowering clients; want to establish a structure for linking participatory learning to program improvement; attention is given to community impacts; understanding client motivations and perceptions is a priority.
Do not use when: Do not have access to highly skilled persons to facilitate discussion; planners are not concerned about participants learning from the assessment process; the sole priority is standardized and statistically representative data for a large and diverse program population; existing tools are inappropriate and there is no time to develop and adequately test new ones.
Below is a brief outline of the steps that should be followed in both qualitative and quantitative
impact assessments.
Before an MFO begins an evaluation or impact assessment, there is generally a planning stage
that ensures it is approached in the most logical way. Some general recommendations for the
planning stage include:
When selecting a program to evaluate, be sure that it is relatively stable. This will
increase the likelihood that the evaluator will accurately examine the impact of the
program.21
21. Barnes and Sebstad.
Clearly state the goals and targets of the evaluation or impact assessment. The
motivation for conducting the assessment will inform the process. In addition, the
intended audience (donors, government agency, MFO) should be made explicit as this
will drive the types of questions that are asked.
Identifying the type of evaluation design that will be employed is appropriate at this
stage. The choice of methods should be based upon the objectives of the evaluation, the
key research questions, the availability of resources (time, money, and people), the degree of
precision required, and the possible application of results.22
State the key research questions. Again, these questions will be developed based on
what information is to be collected and the intended user. The type of research questions
will inform many of the other decisions that need to be made, including the cost of the
assessment, and may also influence the mix of methods selected.
At this early point in the process, consider whether control groups or a random sample is
possible. It is also useful to think about the sample size needed to conduct an assessment.23
Researchers require a large enough sample size to make sure that survey results closely resemble
those of the true population. If not enough people are sampled or the sampling is not conducted
randomly, then the survey will have a sampling error. An example would be conducting a
telephone survey during normal working hours, from 9 to 5: a significant portion of working
people would be missing from the survey, thereby introducing error.
It is also essential to consider the unit of analysis – the individual, household, or institution.
When conducting an impact assessment based on microfinance, the individual borrowers or
households are likely to be the unit of assessment.24
22. Hulme, David, "Impact assessment methodologies for microfinance: A review," Virtual Meeting of the CGAP Working Group on Impact Assessment Methodologies (April 17-19, 1997).
23. The sample size depends on several variables, including how much sampling error can be tolerated, population size, how varied the population is, and the smallest subgroup within the sample for which estimates are needed.
24. Chen, Martha Alter, A Guide for Assessing the Impact of Microenterprise Services at the Individual Level. Microenterprise Impact Project, Washington, D.C., 1997.
Anticipate the level of effort and timeframe needed to perform an evaluation. Allow
enough time for designing the assessment, staff training, and data analysis, as well as
interpretation and presentation of the results to various stakeholders.
After planning is complete, the next stage is the research design and implementation of the
evaluation or impact assessment. These stages divide into two major branches, qualitative and
quantitative; the differences between the two approaches are emphasized in the next section.
Qualitative methods generally involve interviews and other fieldwork that do not necessarily
result in uniform data that can easily be entered into a computer and analyzed. When designing
qualitative research it is important to involve the person(s) requesting the evaluation, as well as
the key program managers and other stakeholders who will be using the information. This helps
make sure that the interview questions and other tools are addressing their concerns. In addition,
encouraging staff participation in the design of the assessment improves employee morale and
provides a sense of ownership of the process.25
25. Simanowitz, Anton, Susan Johnson, and John Gaventa, "Improving the impact of microfinance on poverty: How can IA be made more participatory," Imp-Act Working Papers, University of Sussex, June 2000.
Identify tools to conduct the qualitative portion of the impact assessment. These tools
may include, for example, focus groups, case studies, and participatory appraisals.26
Establish criteria for selecting participants. Select participants who are appropriate for
the questions that need to be answered. The ability to read and write may be a
prerequisite for some qualitative methods, such as a written evaluation. The location of a
focus group meeting or interview may also influence whether or not someone can
participate in the evaluation.27
Design a data analysis plan. Once the qualitative data collection is completed, it will
be necessary to analyze the information. It is important to have a clear plan in place to
facilitate the analysis as soon as possible. The analysis plan ought to include a
framework for organizing the information from the specific records following a format
that reflects the underlying purpose of the qualitative study. This serves to summarize
information across a number of interviews, focus groups or participatory sessions.28
As mentioned above, quantitative methods focus on numbers rather than words. Depending on
the sampling technique and sample size used, the findings should be representative of the
population beyond the individuals involved in the study.
26. Impact assessment tools and methodologies, <http://www.microfinancegateway.org/impact/guide4.htm>.
27. Barnes, C. and J. Sebstad (2000), Guidelines for Microfinance Impact Assessments, Discussion Paper for the CGAP Virtual Meeting October 18-29, 1999. AIMS, Management Systems International.
28. Ibid.
When the goal of the impact assessment is to show that a program caused certain changes in the
target population, quantitative methods are an appropriate research design. Using quantitative
methods enables an organization to understand and track the impact of its programs across a
broad spectrum of clients.29 Quantitative techniques help to measure the actual impacts of an MFO,
while qualitative methods help explain how and why these impacts are occurring.30
There are several key steps involved in designing a quantitative evaluation or impact assessment.
Form the hypothesis: The hypothesis should be refutable – that is, a statement made in
order to draw out and test its logical or empirical consequences.
Select the variables: In order to test the hypothesis, variables need to be accurately
identified and measurable. Examples of variables include income, education level,
marital status, and number of children. Even a variable as straightforward as income
raises measurement issues: is in-kind income counted, for example, such as home-grown
food or barter activity?
Design the questionnaire: When designing the questionnaire, it is essential to keep the
intended audience in mind. Long and complicated questionnaires often frustrate
respondents. Some questionnaires use a mix of open- and close-ended questions. Include
standardized instructions for all field workers to establish some uniformity in the data
collection and recording.31 It is best to refer to a survey design text for guidance, as
designing a useful survey is complicated.
Determine the sample: The larger the sample, the more confident evaluators can be that
the survey truly reflects the population. The sample size depends on two things: the
confidence level and the confidence interval. The confidence interval is the plus-or-minus
figure usually reported in opinion polls. The confidence level is expressed as a
percentage and represents the percentage of the population within the confidence
interval.32

29. Ibid.
30. George, Clive, The Quantification of Impacts, <www.enterprise-impact.org.uk/word-files/QuantitativeMethods.doc>, Manchester, UK, 2001.
31. Barnes, C. and J. Sebstad (2000), Guidelines for Microfinance Impact Assessments, Discussion Paper for the CGAP Virtual Meeting October 18-29, 1999. AIMS, Management Systems International.
The next step in the evaluation process is the implementation stage. In order to ensure that the
evaluation goes smoothly, make sure that all logistics, such as transportation and materials are
organized. For example, a focus group experience can be improved when the participants feel
comfortable at a neutral location. When considering where to hold focus group meetings,
Sebstad and Barnes suggest the following:33
A setting that provides privacy for participants;
A location where there are no distractions;
A non-threatening environment;
A location that is easily accessible for respondents; and
A place where seating can be arranged to encourage involvement and interaction.
Conduct a training session for those who will serve as facilitators, interviewers and/or recorders
prior to the focus group. Training typically includes a mock interview to acquaint the
interviewer with the types of questions and answers that may arise.
The final step in the implementation process would be the initial analysis of the data, as well as a
write-up of the notes taken during the session. This should be done as soon as possible, to ensure
that the information is accurately recorded.
As with qualitative assessment methods, some initial steps prior to implementation help facilitate
the process. These include:
32. Please see the sample size calculator for further description at <http://www.surveysystem.com/sscalc.htm>.
33. Ibid.
Step 1: Plan logistics. Careful logistical planning helps ensure research credibility, keeps the
work on schedule, and makes the evaluation cost-effective. Examples of things to think about
ahead of time include:
Schedule and arrange transportation for the data collectors;
Coordinate the amount of time needed in each field site;
Locate a place to train the enumerators;
Arrange a place where core team members and enumerators can meet daily; and
Ensure that supplies (paper, pens, clipboards) and photocopying facilities are available.
Step 3: Train enumerators to understand the purpose and objectives of the survey, how to
introduce the survey to respondents, and the meaning of each question. They should also learn
good interview techniques and how to record the answers.
Step 4: Pilot test the questionnaire, with a small number of both clients and non-clients who are
not part of the sample, following (or as part of) enumerator training. Close monitoring and
coaching of the enumerators by core team members during the pilot test helps to ensure that the
questions are clearly understood by the data collectors and respondents, the survey is introduced
properly, and responses are recorded properly. A pilot test may reveal problems with the
questionnaire that require changes. After this, the questionnaire and the instruction manual can
be fine-tuned and, finally, photocopied for field use.
Step 5: Supervision of enumerators is critical for valid and reliable data. An impact assessment
core team member should meet with enumerators on a daily basis to review their work, discuss
problems, and provide feedback on the previous day’s questionnaires. This needs to be done
irrespective of whether or not the enumerators are program staff.
Step 6: Data entry should begin as early as possible, ideally after the pilot test of the
questionnaire. If a computer and electricity are available in the field, data entry can start after
the first day of survey work and problems with the questionnaire and/or enumerators can be
detected and rectified quickly. If data are being entered on more than one computer, it is
important to establish early in the process that the files can be merged. Involvement of core
impact assessment team members in data entry is an effective way for them to check the validity
of the data and facilitate coding. It also precludes the need to train and monitor separate data
entry personnel.
It is important to document problems encountered during the implementation stage and how they
were dealt with. This makes the process more transparent and provides a stronger basis for
establishing the credibility of the study.
Good analysis requires a dataset that is manageable in terms of quantity and ease of
manipulation. It further requires sufficient resources (time, money and people) that have been
planned for ahead of time.
The analysis of quantitative survey data should focus on the key research questions and
hypotheses and follow the initial data analysis plan. The first round of analysis should explore
descriptive data, such as averages, frequency counts, and distributions for all variables by key
analytic categories. The next round should involve simple statistical tests on data showing
differences in the characteristics of the sample and in the impact variables. Subsequent analysis
should look at clusters or dis-aggregations of interest (such as location, gender, and client
poverty levels) to determine the extent to which impacts vary by group. If the quantitative
survey data do not fit with the hypotheses, the analysis should proceed to explore what the data
do reveal.
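The sequence of analysis rounds described above might be sketched as follows with pandas and scipy; the dataset and column names are hypothetical.

import pandas as pd
from scipy import stats

# Hypothetical survey data: one row per respondent.
df = pd.DataFrame({
    "client": [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = program client
    "income": [140, 155, 120, 150, 110, 115, 100, 120],
    "location": ["city", "village"] * 4,
})

# Round 1: descriptive statistics by key analytic category.
print(df.groupby("client")["income"].describe())

# Round 2: a simple statistical test of an impact variable.
clients = df.loc[df["client"] == 1, "income"]
non_clients = df.loc[df["client"] == 0, "income"]
print(stats.ttest_ind(clients, non_clients))

# Round 3: disaggregate by groups of interest, such as location.
print(df.groupby(["location", "client"])["income"].mean())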
The qualitative analysis should be guided by the initial plan, but additional information and
analysis that would illuminate the underlying purpose of the qualitative component should be
included. After summarizing the findings, the information should be organized and meaningfully
reduced by selecting and focusing on the points that bear most directly on the purpose of the
qualitative study.
Analysis of the quantitative data is likely to reveal issues to explore further through qualitative
methods. Similarly, the qualitative findings also may suggest certain questions or variables to
probe further in the quantitative analysis. Triangulating the quantitative findings and the
qualitative findings deepens understanding of impacts and helps to establish the reliability of the
findings. Triangulation is a term borrowed from the study of experimental methods and refers to
any attempt to investigate a phenomenon using more than one method. It can be thought of as a
way to cross-check the research: by combining research methods, threats to the internal validity
of the data are reduced.
Presenting and interpreting quantitative and qualitative impact assessment findings should center
on key research questions, evaluation hypotheses, and questions that address rival hypotheses. It
is important to define key terms and concepts and avoid the use of overly technical terms.
Information on key features of the program and context should be woven in to illuminate and
interpret the findings.
Present the data clearly and accurately, and use averages complemented with
distributional data to provide a fuller picture and identify outliers;
Make it easy to see differences between clients and non-clients.
In presenting and interpreting findings from qualitative components, the purpose of the impact
assessment should guide the approach. In general, qualitative findings should be integrated as
appropriate with the quantitative data to illuminate and explain quantitative findings, address
questions that complement the quantitative findings, and provide a human face and tell a story. In
some cases discussion of qualitative findings may dominate, with the quantitative data used to
generalize these findings to a larger population.
In interpreting the findings, never draw conclusions without data to support them.
8.8 Dissemination
more immediate feedback. In fact, having program staff directly involved in all of the stages is
one way to ensure immediate feedback. In addition, an impact assessment linked to a larger
program evaluation is likely to reach a wider audience.
For rapidly growing MFOs, staff resources will have to keep pace with growth and expansion
pressures. Rapid growth requires the hiring of more impact-monitoring officers. Moving
forward, MFO managers will have to continue fine-tuning the balance between the organization's
evaluation and impact assessment needs and its staff resources.
Similarly, MFOs will have to seek the optimal balance between conducting in-house evaluations
and monitoring or outsourcing these functions to consultants. Bringing everything in-house is an
attractive, short-term way to lower costs.
Also, standards that help an MFO measure its performance are crucial. Financial audits can
compare internal financial practices against established external standards. Microfinance
organizations can benefit from a set of common standards for “good practice” that would help
assess impact performance just as they assess financial performance.
Finally, as the organization presents its findings to clients it should ask for feedback through a
combination of reports and focus group discussions. Maintaining a focus on clients’
participation and feedback is critical if the findings are to be useful to them as well as to other
decision-makers. Creating opportunities for client input and reflection throughout the impact
assessment and monitoring process is an important task for everyone involved.
Performance indicators may be used in an MFO's formative evaluation. These indicators measure
the financial viability of an organization as well as its efficiency and productivity. Other
quantitative measures look at the MFO's client base, outreach, and demographic information.34
Regardless of the specific performance standard employed by an organization, “the emphasis is
34. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
on MFOs achieving minimum agreed performance standards and taking significant incremental
steps to improve performance.”35
Quantitative Indicators:
The largest source of risk for any financial institution (public or private) is its loan portfolio. The
loan portfolio is typically the largest asset of a microfinance organization, yet it is often quite
difficult to measure the risk that the loan portfolio represents for the MFO. Microfinance
organizations usually lack bankable collateral; therefore the quality of the loan portfolio is an
important tool for MFOs to demonstrate fiscal responsibility.36
There are several key ratios that microfinance managers must pay attention to from the beginning
to the end of a project. These are listed below in Table 1.1.
35. <http://www.swwb.org/English/2000/performance_standards_in_microfinance.htm>.
36. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
Table 1.1 Key Portfolio Ratios

Portfolio at Risk (PaR)
PaR = (Outstanding Balance on Arrears over 30 days + Total Gross Outstanding Refinanced (restructured) Portfolio) / Total Outstanding Gross Portfolio37
- Measures the outstanding balance in arrears as a percentage of the total portfolio. A microfinance loan is usually considered to be at risk if a payment is more than 30 days late.
- A PaR greater than 10% is cause for concern.
- PaR is a more conservative measure of risk than other ratios: it considers the risk that the entire amount of the outstanding loan balance will not be repaid.38 In other words, "it measures the complete risk and not only the immediate threat."39

Provision Expense Ratio
Provision Expense Ratio = Loan Loss Provision Expenses / Average Gross Portfolio
- Used to determine and understand the expenses an MFO can safely incur as it anticipates future loan losses.
- Generally viewed together with PaR ratios to make determinations about portfolio quality.

Loan Loss Reserve Ratio (LLRR)
Loan Loss Reserve Ratio = Loan Loss Reserves / Total Outstanding Gross Portfolio
- A measure of an MFO's accrued provision expenses and a general indicator of projected future loan losses.
- Gives a rough estimate of the overall health and quality of the loan portfolio, but should be used in tandem with other indicators.40
- Provides information on the amount of outstanding principal that an MFO does not expect to recover.41

Risk Coverage Ratio
Risk Coverage Ratio = Loan Loss Reserves / (Outstanding Balance on Arrears over 30 days + Refinanced Loans)
- Measures the percentage of the Portfolio at Risk that can be covered by loan loss reserves.
- By using this ratio, an MFO can determine whether it is prepared for an emergency.
- It is important that the Risk Coverage Ratio be analyzed along with the Portfolio at Risk measure, as well as any write-offs the institution is making, to accurately determine portfolio quality.42

Write-Off Ratio
Write-Off Ratio = Write-Offs / Average Gross Portfolio
- A loan becomes a write-off when an organization determines that there is little chance it will be repaid.
- Writing off a loan does not usually affect total assets, net loan portfolio, expenses, or net income.
- Write-off practices vary widely: some organizations write off loans regularly, while others may never write off a loan. Because practices vary, the ratio is often considered along with other ratios to accurately assess the state of the portfolio.43
37. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
38. Ledgerwood, Joanna, "Financial Management Training for Micro-Finance Organizations. Finance: Study Guide."
39. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
40. Ibid.
41. Ledgerwood, "Financial Management Training for Micro-Finance Organizations. Finance: Study Guide."
42. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
43. Ibid.
Even when a loan has been written off, the organization need not stop pursuing the funds,
provided it is not cost-prohibitive to do so.
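The portfolio ratios in Table 1.1 are simple arithmetic on portfolio figures. A minimal sketch with hypothetical balances:

# All figures are hypothetical and in the same currency units.
gross_portfolio = 1_000_000        # total outstanding gross portfolio
arrears_over_30_days = 60_000      # outstanding balance on arrears > 30 days
refinanced_portfolio = 20_000      # outstanding refinanced (restructured) loans
loan_loss_reserves = 50_000
loan_loss_provision_expense = 30_000
write_offs = 15_000
average_gross_portfolio = 950_000

par_30 = (arrears_over_30_days + refinanced_portfolio) / gross_portfolio
provision_expense_ratio = loan_loss_provision_expense / average_gross_portfolio
loan_loss_reserve_ratio = loan_loss_reserves / gross_portfolio
risk_coverage = loan_loss_reserves / (arrears_over_30_days + refinanced_portfolio)
write_off_ratio = write_offs / average_gross_portfolio

print(f"PaR(30): {par_30:.1%}")  # 8.0%, below the 10% warning level
print(f"provision expense ratio: {provision_expense_ratio:.1%}")
print(f"loan loss reserve ratio: {loan_loss_reserve_ratio:.1%}")
print(f"risk coverage ratio: {risk_coverage:.1%}")
print(f"write-off ratio: {write_off_ratio:.1%}")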
Efficiency and Productivity
Efficiency and productivity indicators provide information about the institution’s operations.
Efficiency indicators demonstrate whether an organization serves as many clients as possible,
while keeping costs low.44 Productivity indicators demonstrate the amount of output produced
per unit of input.
Efficiency rates of microfinance organizations are generally lower than those of traditional
commercial banks, because microcredit is highly labor intensive. Further, "economies
of scale have much less impact on efficiency in MFOs than is usually believed because of the
high variable costs of the microcredit technology."45
Like the performance indicators discussed above, several ratios are used to evaluate the
efficiency and productivity of a microfinance organization. They include the operating
expense ratio and the borrowers per staff ratio. Please see Table 1.2 for a list of these ratios.

Table 1.2 Efficiency and Productivity Ratios

Borrowers per Staff
Borrowers per Staff = Number of Borrowers (excluding Consumer and Pawn Loans) / Total Staff
- Calculating the number of borrowers per staff member is another way to assess the productivity of the organization's staff: the higher the ratio, the more productive the organization. In other words, it indicates how well the organization has developed and implemented its procedures and processes.
- Low staff productivity does not always mean that staff are working less than they should; it can also indicate overly cumbersome, time-consuming paperwork and other administrative tasks.48
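A small sketch of the productivity calculation; the staffing figures are hypothetical, and the operating expense ratio shown uses the common definition (operating expenses over average gross portfolio), an assumption here since the module names that ratio without giving its formula.

# Hypothetical staffing and expense figures.
borrowers_excl_consumer_pawn = 2_400
total_staff = 30
operating_expenses = 180_000
average_gross_portfolio = 950_000

borrowers_per_staff = borrowers_excl_consumer_pawn / total_staff
# Common definition, assumed for illustration:
operating_expense_ratio = operating_expenses / average_gross_portfolio

print(f"borrowers per staff: {borrowers_per_staff:.0f}")          # 80
print(f"operating expense ratio: {operating_expense_ratio:.1%}")  # 18.9%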
44. Ledgerwood, Joanna, Financial Management Training for Micro-Finance Organizations. Finance: Study Guide.
45. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
46. Ledgerwood, Joanna, Financial Management Training for Micro-Finance Organizations. Finance: Study Guide.
47. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
48. Ibid.
Financial Management
Financial management measures the organization’s ability to meet its obligations: namely to
provide clients with loans and to repay loans to the organization’s creditors. Decisions in this
area are very important and can directly affect the bottom line of the organization. For example,
careful financial management is necessary when making decisions about investment of an
organization’s funds. In addition, sound financial management is important for creditors and
other lenders who will be funding the organization’s operations.49
The following ratios in Table 1.3 are often used in assessing financial management:
49. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
50. Ibid.
51. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
52. Ledgerwood, Joanna, Financial Management Training for Micro-Finance Organizations. Finance: Study Guide.
its funds poorly. A low liquidity ratio (<5%) often indicates that an MFO has outgrown its funding sources and is facing a cash crunch."53

Debt/Equity Ratio
Debt/Equity Ratio = Total Liabilities / Total Equity
- The debt/equity ratio is important to the lender because it shows how much "safety," in the form of equity, there is in the organization in the event it experiences a loss.
- Microfinance organizations traditionally have low debt-to-equity ratios because their ability to borrow money is limited.
- Changes in this ratio are often more telling about the financial situation of an organization than the absolute ratio: if the debt-to-equity ratio increases, for example, the organization may be reaching its borrowing limits.
- A high debt-to-equity ratio is less of a risk, however, if the organization's liabilities are largely from long-term funding sources.54
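Because changes in the debt/equity ratio can matter more than its level, a small sketch that tracks the ratio across two hypothetical year-ends:

# Hypothetical year-end balance sheet figures.
total_liabilities = {"year 1": 300_000, "year 2": 450_000}
total_equity = {"year 1": 500_000, "year 2": 500_000}

for year in ("year 1", "year 2"):
    ratio = total_liabilities[year] / total_equity[year]
    print(f"{year} debt/equity: {ratio:.2f}")
# The rise from 0.60 to 0.90 is more telling than either level alone:
# it may signal that the MFO is approaching its borrowing limits.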
Profitability
Profitability indicators are designed to summarize the overall performance of an organization.
Profitability indicators are often difficult to interpret because there are many factors that affect
profitability. If portfolio quality is poor or efficiency is low, profitability will also be affected.
Profitability indicators should not be analyzed and interpreted in isolation; all performance
indicators tend to be of limited use when studied separately, and this is particularly true of
profitability indicators. To understand how an institution achieves its profits or losses, "the
analysis also has to take into account other indicators that illuminate the operational performance
of the institution, such as operational efficiency and portfolio quality.”55
The following indicators are usually analyzed when profitability of an organization is measured:
53. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
54. Ibid.
55. Performance Indicators for Microfinance Institutions, <http://www.iadb.org/sds/doc/MSMTechnicalGuideOctober30JANSSON.pdf>.
it may not provide much information. Significant and extreme income or losses can distort the true ratio, so multi-year analysis may provide a more accurate picture of an organization's profitability.56

Return on Assets (ROA)
Return on Assets = Net Income / Average Assets
- ROA measures how well a microfinance organization uses its assets. It does so by essentially measuring the profitability of an organization, reflecting both its profit margin and its efficiency.57

Portfolio Yield
Portfolio Yield = Interest and Fee Income / Average Gross Portfolio
- Portfolio Yield shows how much interest an organization received from its borrowers during a certain period of time.
- By comparing the Portfolio Yield with the average effective lending rate, an organization can learn how efficient it is in collecting fees.
- For this ratio to be useful, it must be examined in light of the average lending rate; only by comparing the two can an organization understand its operations relative to the environment in which it operates.58
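A minimal sketch of the two profitability ratios, with hypothetical annual figures; as the table notes, the portfolio yield is meaningful only when set against the average effective lending rate.

# Hypothetical annual figures.
net_income = 40_000
average_assets = 1_200_000
interest_and_fee_income = 230_000
average_gross_portfolio = 950_000

return_on_assets = net_income / average_assets
portfolio_yield = interest_and_fee_income / average_gross_portfolio

print(f"return on assets: {return_on_assets:.1%}")  # 3.3%
print(f"portfolio yield: {portfolio_yield:.1%}")    # 24.2%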
Demographic Information
These criteria are not used to determine whether one organization is doing better than another.
Rather, the information should be used to determine how best to serve the demographics of an
organization's clients as it works towards financial self-sufficiency.59
Qualitative Indicators
Does the MFO focus on serving customers who match the organization's mission – for
example, those with low incomes? This focus need not be exclusive, but it is important
to confirm that the mission of the organization is being fulfilled.
Is the organization using an individual approach when serving its customers?
Are your clients making attempts to save? If so, your clients could be depositing their
money into microsaving accounts sponsored by your organization that would allow
them to save for larger emergencies. Both your organization and your clients can
benefit from such a service, though additional research is necessary before you
establish this type of service.60
What is the method of outreach that the organization employs? Does this method
work with the stated goals and mission of the organization?
Do your clients generally rely on collateral substitutes? If so, do you have a method
in place for collecting repayment in case of default? Often, the only course of
punishment that an organization can employ is denying future credit to clients. The
degree to which this punishment deters default depends on the
59. CARE Savings and Credit Sourcebook, Chapter 11.
60. Innovations in Microfinance: The Microfinance Experience with Savings Mobilization, <http://www.mip.org/pdfs/mbp/technical_note-3.pdf>.
needs of the clientele. The bottom line is that you should be considering what your
clients are using for collateral, and how this influences the ability of your
organization to continue providing services.61
Culture, structure, and systems of the organization’s work. This includes a stable
system of organizational management, freedom from political interference,
appropriateness to local conditions, competent employees, a stable business plan, and
a clearly defined purpose and vision.62
Adequate and complete systems in place, ensuring competitive management
information systems.
An appropriate, sustainable methodology that permits the MFO to
effectively deliver its services.63
Transparent financial reporting that adheres to the corresponding international standards
and allows prospective donors to adequately assess the organization's work.64
Sound business plan and accurate financial projections
Effective operating systems
However, financial viability is not the only organizational advantage. Other competitive
considerations revolve around the product mix. They include the following:
Market share and customer retention: What percentage of the market is receiving an
organization's services? Are customers returning for additional loans?
Product mix: Do new products complement or dilute existing ones?
Position in the marketplace: Is it likely that the organization can maintain or even
enhance its position in the market?