THE EFFECTS OF MONITORING AND EVALUATION PRACTICE ON PROJECT SUCCESS IN NGOS: A CASE STUDY ON PROJECTS FUNDED BY COMPASSION INTERNATIONAL ETHIOPIA

By
Mesfin Assefa Tekelie

Jimma University
College of Business and Economics
Department of Accounting and Finance

JUNE, 2020
Addis Ababa, Ethiopia
JIMMA UNIVERSITY
COLLEGE OF BUSINESS AND ECONOMICS
DEPARTMENT OF ACCOUNTING AND FINANCE

______________________________ ________________________
Advisor                                                     Signature
CERTIFICATE
This is to certify that the thesis entitled "The Effect of Monitoring and Evaluation Practice on Project Success: A Case of Compassion International Ethiopia Financed Projects", submitted for the award of the Degree of Master of Project Management and Financing (MPMF), is a record of research work carried out by Mr. Mesfin Assefa under our guidance and supervision. We hereby declare that no part of this thesis has been submitted to any other university or institution for the award of any degree or diploma.
ACKNOWLEDGEMENT
This work has come to completion not only through the effort of the researcher but also through the support of many individuals and organizations. To begin with, I would like to express my sincere gratitude and thanks to Mr. Abel Worku (Assistant Professor) and Mr. Mohamed Getahun (Assistant Professor) for their constructive advice, support, and helpful recommendations throughout the course of this study. Their down-to-earth personalities, attention to detail, deep knowledge, constructive criticism, and continuous support are an example I hope to match someday. Without their support, this work would not have come to reality. My gratitude also goes to my family for their support, valuable ideas, and much love.
Last but not least, I would like to express my great thanks to all the study participants for sharing pertinent ideas and information and for providing the different materials important for this thesis work.
Table of contents
Contents
DECLARATION ----------------------------------------------------------------------------------------------------- I
CERTIFICATE ------------------------------------------------------------------------------------------------------- I
ACKNOWLEDGEMENT ----------------------------------------------------------------------------------------- II
Table of contents --------------------------------------------------------------------------------------------------- III
List of tables ---------------------------------------------------------------------------------------------------------- V
List of figures-------------------------------------------------------------------------------------------------------- VI
Acronyms ----------------------------------------------------------------------------------------------------------- VII
Abstract ------------------------------------------------------------------------------------------------------------ VIII
Chapter One ----------------------------------------------------------------------------------------------------------- 1
1. Introduction -------------------------------------------------------------------------------------------------------- 1
1.1. Background of the Study ------------------------------------------------------------------------------------------- 1
1.2. Statement of the Problem ------------------------------------------------------------------------------------------- 4
1.4. General Objective ---------------------------------------------------------------------------------------------------- 5
1.4.1 Specific Objectives ---------------------------------------------------------------------------------------------- 5
1.5. Hypothesis ------------------------------------------------------------------------------------------------------------- 5
1.6. Limitation of the study ---------------------------------------------------------------------------------------------- 6
1.7. Scope of the Study ------------------------------------------------------------------------------------------------- 6
1.8. Significance of the Study ---------------------------------------------------------------------------------- 7
1.9. Organization of the Study ------------------------------------------------------------------------------------------ 7
CHAPTER TWO ----------------------------------------------------------------------------------------------------- 8
2. REVIEW OF RELATED LITERATURE ------------------------------------------------------------------ 8
2.1. Theoretical Review -------------------------------------------------------------------------------------------------- 8
2.1.1 Monitoring and evaluation system --------------------------------------------------------------------------- 8
2.1.2 Monitoring and Evaluation in project management ---------------------------------------------------- 10
2.1.3 The Benefits of Monitoring and Evaluation for Organizations --------------------------------------- 11
2.1.4 Purposes of Monitoring and Evaluation for Public Organization---------------------------------- 12
2.1.5. Introducing the 10-Step Model for Building a Results-Based M&E System --------------------- 12
2.3. Empirical Review -------------------------------------------------------------------------------------------------- 14
2.3.1. Project Success ------------------------------------------------------------------------------------------------ 15
2.3.2. Monitoring and Evaluation --------------------------------------------------------------------------------- 16
2.3.3. Project Life cycle stage -------------------------------------------------------------------------------------- 16
2.4. Conceptual Framework ------------------------------------------------------------------------------------------- 17
Figure 1. Conceptual Framework -------------------------------------------------------------------------- 17
Chapter Three -------------------------------------------------------------------------------------------------------18
3. Research Design and Methodology ---------------------------------------------------------------------------18
3.1. Research Design --------------------------------------------------------------------------------------------------- 18
3.2. Data source --------------------------------------------------------------------------------------------------------- 18
3.3. Data Collection Procedure ------------------------------------------------------------------------------------- 20
3.4. Data Gathering Instruments ----------------------------------------------------------------------------------- 20
3.4.1. Questionnaire ------------------------------------------------------------------------------------------------ 20
3.4.2. Key Informant Interview -------------------------------------------------------------------------- 20
3.5. Target Population ------------------------------------------------------------------------------------------------ 19
3.6. Sample and Sampling Techniques --------------------------------------------------------------------------- 19
3.6. Method of Data Analysis ----------------------------------------------------------------------------------------- 20
3.7. Regression model ------------------------------------------------------------------------------------------------- 21
3.8. Data Analysis Technique ----------------------------------------------------------------------------- 21
3.9. Reliability and Validity ------------------------------------------------------------------------------------------- 21
3.10. Ethical Consideration -------------------------------------------------------------------------------------------- 22
Chapter Four ---------------------------------------------------------------------------------------------------------23
Results And Discussions--------------------------------------------------------------------------------------------23
4.1. Data presentation and interpretation ----------------------------------------------------------------------- 23
4.2. Descriptive statistical Analysis -------------------------------------------------------------------------------- 23
4.2 .1.Demographic Characteristics of the Respondents ----------------------------------------------------- 23
4.4. Inferential Analysis----------------------------------------------------------------------------------------------- 30
Chapter Five ---------------------------------------------------------------------------------------------------------37
Summary, Conclusion and Recommendations ----------------------------------------------------------------37
5.1. Introduction -------------------------------------------------------------------------------------------------------- 37
5.2. Summary of Key Findings-------------------------------------------------------------------------------------- 37
5.3. Conclusion ---------------------------------------------------------------------------------------------------------- 38
5.4 Recommendations ------------------------------------------------------------------------------------------------- 39
References ---------------------------------------------------------------------------------------------------------- XL
Annexes ----------------------------------------------------------------------------------------------------------- XLII
List of tables
List of figures
Figure 2: Scatterplot …………………………………………………………………………….…..34
Figure 3: Histogram ………………………………………………………………………………33
Acronyms
CIE: Compassion International Ethiopia
IPTT: Indicator Performance Tracking Table
KPI: Key Performance Indicator
M&E: Monitoring and Evaluation
NGO: Non-Governmental Organization
SPSS: Statistical Package for the Social Sciences
Abstract
Monitoring and evaluation of projects is an integral part of the project cycle and of good management practice. An effective monitoring and evaluation system is fundamental if the goals of a project are to be achieved. By setting up proper monitoring and evaluation systems, planning, efficiency, and proper utilization of funds can be achieved to enhance the performance of projects. The general objective of this paper is to assess the effect of monitoring and evaluation functions on achieving project success. To achieve the study objective, an explanatory research design with a mixed-methods approach was employed. Primary data were collected through a survey questionnaire from 65 project staff members selected using a convenience sampling technique. Interviews were also conducted with senior management team members to triangulate the quantitative data obtained from the survey and the regression analysis. The findings of the study revealed that a poorly practiced monitoring and evaluation system, team incompetency, weak program accountability, and the project life cycle stage have adverse effects on CIE-funded projects. Based on the analysis, the study recommends that CIE-funded projects work on improving project success by paying attention to monitoring and evaluation procedures, particularly by preparing adequate work breakdown structures with the expected outcomes in order to reduce project ineffectiveness and inefficiency. Using a standardized monitoring and evaluation model contributes to the success of projects, and the monitoring and evaluation capability of employees should be raised continuously by providing relevant training programs. The study also found that there is a significant relationship between each of the mentioned factors and the dependent variable, project success. The researcher recommends that M&E tools be made part of the key performance indicators so that staff are held accountable for their actions or inaction.
Chapter One
1. Introduction
1.1. Background of the Study
This study assesses the effect of monitoring and evaluation on Compassion International Ethiopia projects. Compassion International Ethiopia is one of the international NGOs that has been actively involved in a variety of developmental and humanitarian activities in Ethiopia since 1993 G.C. The research shows the role of monitoring and evaluation in the success of selected Compassion International Ethiopia-assisted projects. Compassion International has more than 470 projects serving around 110,000 children in Ethiopia through holistic services. Compassion works primarily through child sponsorship but also has specific initiatives to help babies and mothers, to develop future leaders, and to meet critical needs (CIE annual report, 2016).

All Compassion-assisted projects serve children holistically: in education, by providing educational materials, school fees, and tutorials; in health, through education on disease prevention and treatment during illness; and in socio-emotional provision, through age-related trainings and socio-emotional education delivered by concerned professionals. The different projects in different areas are designed to contribute to changes in children's lives. Project- and program-level reports, monitoring reports, minutes of review meetings, and evaluations are used to validate the findings and recognize the role of monitoring and evaluation in project success. The purpose of this research is to investigate the effect of monitoring and evaluation practice on achieving project success in Compassion International Ethiopia (CIE annual report, 2016).
The PMBOK (2004) highlights various factors that may lead to project success, including creating the right teams; involving stakeholders; preparing a detailed project scope; influencing stakeholders; information; managing expectations; communication; negotiation; and monitoring and evaluation. This implies that monitoring and evaluation is one of the critical factors of project success. Equally, several studies have been carried out focusing on project success. For example, Raymond and Bergeron (2008) identified several indicators of project success in the literature, including "reduction of the time required to complete a task, improved control of activity costs, better management of budget, improved planning of activities, better monitoring of activities, more efficient resource allocation, and better monitoring of the project schedule". Project management is hence acknowledged as the most successful approach to managing the changes brought about by projects, because it has techniques and tools that enable control and delivery of project activities within the given deliverables, timeframe, and budget (Shapiro, 2011). M&E is one of the tools that help project managers track performance and also provide management with information for making decisions about the project.
The Organisation for Economic Co-operation and Development (OECD, 2002) also describes context (situation) monitoring, which tracks the setting in which the project/program operates, especially as it affects identified risks and assumptions, but also any unexpected considerations that may arise. It includes the field as well as the larger political, institutional, funding, and policy context that affect the project/program. For example, a project in a conflict-prone area may monitor potential fighting that could not only affect project success but endanger project staff and volunteers.

The OECD (2002) defines monitoring and evaluation as follows. Monitoring is a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of progress and achievement of objectives and progress in the use of allocated funds. Evaluation, on the other hand, is the systematic assessment of an ongoing or completed project, program, or policy, its design, implementation, and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact, and sustainability.
Globally, Australia is one of the leading countries in embracing M&E systems in development projects (UNDP, 2002). The government created a fully-fledged government evaluation system, managed by the Department of Finance (DOF). This provided a spending baseline and freed the budget process from detailed, line-item scrutiny of spending to focus instead on changes in government policy and spending priorities in development projects. The Government of Australia advocated the principles of program management and budgeting, with a focus on the efficiency and effectiveness of government programs, through sound management practices, the collection of performance information, and the regular conduct of program evaluation (Mackay, 2011).

What should an M&E plan include? The M&E plan should include an introduction and an indicator matrix table. The introduction should briefly describe the aims and objectives of the program, the methodologies used to obtain the data, the planned interventions to be implemented, the critical assumptions underlying the intervention, and the anticipated critical hindrances that might have an effect on the project.
Government M&E systems in Africa operate in complex terrain. To some extent they are hostage to other forces in government; nevertheless, given a results-driven reform agenda, incentives can be put in place so that the evidence generated supports improvements in delivery and budgeting, and so that monitoring and evaluation are consistently designed to support valued change in people's lives, particularly for the underprivileged (Nabulu, 2015). In Kenya, monitoring and evaluation systems have not been very effective, due to several challenges, especially in the government sector. In 2005, the then Ministry of Planning and National Development commissioned work on the design of an appropriate framework for monitoring and evaluation (M&E) in the National Development Program. This proposed monitoring and evaluation framework has not been fully operationalized. This view is supported by Wanjiru (2008), who indicated in her social audit of the CDF that monitoring and reporting should be strengthened and deepened in all CDF projects.
1.2. Statement of the Problem
Monitoring and Evaluation (M&E) provides government officials, development managers, the public and private sectors, and civil society with better means of learning from past experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability to key stakeholders (International Finance Corporation (IFC), 2008). It also brings institutional development, which refers to creating or strengthening the capacity of an institution to reflect systematically and rigorously on its role and function and to better enable it to carry out its responsibilities. It reflects an attempt to introduce change and development in the way the institution is organized so that it is better able to meet its mission (World Bank, 2005).
The performance of projects depends on various factors. One of the key factors for project success is having a good monitoring and evaluation system and good practices. Project monitoring and evaluation is an important element of program management, as it adds value to the overall efficiency of project implementation by offering corrective actions for variances from the expected standard. Project directors are required to exercise closer monitoring and evaluation of projects and to develop frameworks and guidelines for measuring performance.
A preliminary assessment of Compassion International Ethiopia and its assisted projects shows that monitoring and evaluation faces a number of challenges. There is a monitoring and evaluation system in the different programs and at the country office; however, the system is neither efficient nor effective. In some cases the project monitoring and evaluation system does not exist, projects are not routinely monitored, monitoring findings are not taken up by decision makers, the project team does not follow up on translating findings into practice, and the evaluations conducted are not well monitored for further improvement.
Monitoring and evaluation is known to be the backbone of project performance, yet the preliminary study found no research on the monitoring and evaluation system of Compassion International Ethiopia. This research therefore tries to identify the monitoring and evaluation system and the gaps in its methods within the organization.
Compassion International Ethiopia-financed projects do not have a monitoring and evaluation officer at the project level, and the end results for project beneficiaries could not be evaluated together with the donor, Compassion International Ethiopia. Based on the above problems, the research addresses research questions on the effect of monitoring and evaluation practice on project success in CIE-financed projects.
In response to the problem, the researcher proposes a detailed study to bridge the literature gap regarding CIE-funded projects. The gaps were identified from the literature reviewed under this topic; however, the researcher could not find much in the Ethiopian context or for CIE-financed projects. Furthermore, the studies reviewed on international NGOs are descriptive and do not provide inferential analysis of the sector, nor do they address the complex issues that limit our understanding of the emergent effects of monitoring and evaluation on project success factors in CIE-financed projects, which increasingly affect NGOs' poverty alleviation projects.
1.5. Hypothesis
H0: Monitoring and evaluation practice does not have a positive and significant effect on project success.
HA: Monitoring and evaluation practice has a positive and significant effect on project success.
H0: Monitoring and evaluation competency does not have a positive and significant effect on project success.
HA: Monitoring and evaluation competency has a positive and significant effect on project success.
H0: Downward accountability does not have a positive and significant effect on project success.
HA: Downward accountability has a positive and significant effect on project success.
H0: Project life cycle does not have a positive and significant effect on project success.
HA: Project life cycle has a positive and significant effect on project success.
The study focused on factors affecting monitoring and evaluation practice in NGO project success, on the aspects of project management that affect project success, although various other internal and external factors also play a role.

Lack of adequate resources could also be taken as an important limitation of this research, as increasing the sample size or incorporating additional research designs would require much more in terms of time, cost, and quality.
The scope of the research covers projects handled in Addis Ababa city only. CIE carries out various projects outside Addis Ababa, but due to time and cost constraints these were not included; future studies should include them for a broader view.
Compassion International Ethiopia. In particular, the research focused on program staff members who have in-depth knowledge of both project management and monitoring and evaluation.
CHAPTER TWO
2. REVIEW OF RELATED LITERATURE
2.1. Theoretical Review
2.1.1 Monitoring and evaluation system
Project monitoring and evaluation effectiveness depends on the approach to monitoring and evaluation, monitoring and evaluation competency, downward accountability, and sound involvement of monitoring and evaluation throughout the project life cycle. There are various monitoring and evaluation approaches that have been singled out through the literature review; they are explained in the following paragraphs. Various monitoring and evaluation approaches and tools have been used in the development sphere and have undergone changes in parallel with the dominant development paradigms in the development discourse (Hummelbrunner, 2010).
The main monitoring and evaluation approaches are currently based on the positivist and constructivist paradigms. The former are linear, rigid, and quantitative approaches, while the latter are more nonlinear and qualitative, allowing room for measuring complex processes (Rogers, 2012). Some believe that a combination of these methods can work best, while others insist that fusion of these tools is not possible as they are completely different (Earl et al., 2001).
The Balanced Scorecard is another approach that can be employed in evaluating projects. It evaluates projects on the basis of four perspectives: the financial perspective, the customer perspective, internal business processes, and learning and growth. Alhyari et al. (2013) found that the balanced scorecard approach fitted very well for monitoring and measuring the performance of e-government in Jordan, and also for evaluating success in IT project investments. In the INGO context in Ethiopia, balanced scorecards are largely associated with the Ethiopian Social Accountability Program (ESAP) (Anteneh, 2015). Hence, the focus of this research is to look at the role of monitoring and evaluation more specifically in relation to the project life cycle, accountability, the monitoring and evaluation system, and competency in achieving the success of the project.
The logical framework (Log Frame) is one of the most common approaches used in project management for both planning and monitoring of projects. The Log Frame matrix is a tool applicable to all organizations, both governmental and nongovernmental, that are engaged in development activities (Middleton, 2005; Martinez, 2011). Hummelbrunner (2010) further confirms the continued use of the Log Frame despite several criticisms, asserting that the Log Frame approach has not been fundamentally weakened by its critics.

Even though many donors acknowledge its limits and weaknesses, they still maintain its use as a planning and monitoring tool. Myrick (2013) expresses that a pragmatic approach to monitoring and evaluation is ideal; however, in the real world practitioners may be limited by constraints that prevent their continued use of either a log frame or some overly pragmatic approach to M&E. Myrick (2013) further explains that whatever the approach used, at least the basic principles of monitoring and evaluation, which are measurable objectives, performance indicators, targets, and periodic reporting, should be used in a reporting tool. The advantages of a Log Frame include simplicity and efficiency in data collection, recording, and reporting. However, the Log Frame has faced criticism around its linearity, rigidity, and stifling of creative and innovative ways of working.

Conditions and efforts have to be made to modify the logical framework through the inclusion of more participatory learning elements. Hence, this study will try to look at which monitoring and evaluation practices help to measure outcomes and impact correctly and consequently contribute to project success (Myrick, 2013).
However, good intentions, large programs and projects, and large amounts of financial resources are not enough to ensure that development results will be achieved. The quality of those plans, programs, and projects, and how well resources are used, are also critical factors for success.

To improve the chances of success, attention needs to be placed on some of the common areas of weakness in programs and projects. Four main areas of focus are identified consistently:
1. Planning and program and project definition: projects and programs have a greater chance of success when the objectives and scope of the program or project are properly defined and clarified. This reduces the likelihood of experiencing major challenges in implementation.
as information on progress and performance. This clarity helps to ensure optimum use of
resources.
4. Monitoring and evaluation: programs and projects with strong monitoring and evaluation components tend to stay on track. Additionally, problems are often detected earlier, which reduces the likelihood of major cost overruns or time delays later.

Good planning combined with effective monitoring and evaluation can play a major role in enhancing the effectiveness of development programs and projects. Good planning helps us focus on the results that matter, while monitoring and evaluation help us learn from past successes and challenges and inform decision making so that current and future initiatives are better able to improve people's lives and expand their choices (UNDP, 2009, 2010).
According to Sarah (2006), evaluation work can: improve effectiveness in the way your organization meets local needs; identify areas for improvement in your service to users; attract resources; help share learning and experience across the organization; improve accountability to users, members, and funders; give greater work satisfaction to all managing body members, staff, and volunteers; celebrate progress and achievement; identify changes or new directions; and make the case for new resources.
Monitoring and evaluating program or project performance enables improved management of outputs and outcomes while encouraging the allocation of effort and resources in the direction where they will have the greatest impact. M&E can play a crucial role in keeping projects on track, creating the basis for institutional learning, and creating an evidence base for current and future projects through the systematic collection and analysis of information on the implementation of a project (IFC, 2008).
2.1.4 Purposes of Monitoring and Evaluation for Public Organization
If organizations are to carry out effective M&E around capacity building, a key first question to address is: what is the purpose of that M&E? The usual answer is a combination of accountability and learning in order to improve performance (Nigel S & Rachel S, 2010). Monitoring and evaluating organizational practices is necessary to improve and enhance the quality of existing programs, and NGOs are facing increasing requirements to provide evidence to support their performance.

According to McDonald (2003), monitoring and evaluation helps organizations to: assess the efficiency and effectiveness of a program; refine and improve an existing program; decide whether to continue or replicate an initiative; contribute to the established evidence base; and justify the program or initiative and help procure further funding. For these reasons, it is important that organizations devote resources towards improving their monitoring and evaluation processes, as well as their capacity (Eccles & Gootman, 2002).
2.1.5. Introducing the 10-Step Model for Building a Results-Based M&E System
Although experts vary on the specific sequence of steps in building a results-based M&E system,
all agree on the overall intent. For example, different experts propose four- or seven-step models.
Regardless of the number of steps, the essential actions involved in building an M&E system are to: formulate outcomes and goals; select outcome indicators to monitor; gather baseline information on the current condition; set specific targets to reach and dates for reaching them; regularly collect data to assess whether the targets are being met; and analyze and report the results (New Delhi, pp. 24–31).
Given the agreement on what a good system should contain, why are these systems not part of
the normal business practices of government agencies, stakeholders, lenders, and borrowers?
One evident reason is that those designing M&E systems often miss the complexities and
subtleties of the country, government, or sector context. Moreover, the needs of end users are
often only vaguely understood by those ready to start the M&E building process. Too little
emphasis is placed on organizational, political, and cultural factors.
Step 1 of the model is conducting a readiness assessment. Throughout, the model highlights the political, participatory, and partnership processes involved in building and sustaining M&E systems, that is, the need for key internal and external stakeholders to be consulted and engaged in setting outcomes, indicators, targets, and so forth.
Step 2 of the model involves choosing outcomes to monitor and evaluate. Outcomes show the
road ahead.
Step 3 involves setting key performance indicators to monitor progress with respect to inputs,
activities, outputs, outcomes, and impacts. Indicators can provide continuous feedback and a
wealth of performance information. There are various guidelines for choosing indicators that can
aid in the process. Ultimately, constructing good indicators will be an iterative process.
Step 5 builds on the previous steps and involves the selection of results targets, that is, interim
steps on the way to a longer-term outcome. Targets can be selected by examining baseline
indicator levels and desired levels of improvement.
Step 6 of the model includes both implementation and results monitoring. Monitoring for results entails collecting quality performance data, for which guidelines are given.
Step 8, reporting findings, looks at ways of analyzing and reporting data to help decision makers make the necessary improvements in projects, policies, and programs.
Step 9, using findings, is also important in generating and sharing knowledge and learning within
governments and organizations.
Finally, Step 10 covers the challenges in sustaining results-based M&E systems including
demand, clear roles and responsibilities, trustworthy and credible information, accountability,
capacity, and appropriate incentives.
The use of such results-based M&E systems can help bring about major cultural changes in the ways that organizations and governments operate. When built and sustained properly, such systems can lead to greater accountability and transparency, improved performance, and the generation of knowledge (Jody Zall Kusek and Ray C. Rist, Ten Steps to a Results-Based Monitoring and Evaluation System, The World Bank, pp. 23–25).
In practice, however, monitoring and evaluation systems commonly face the following weaknesses:
- In project cycle management, the attention given to monitoring and evaluation is inadequate, resulting from insufficient resource allocation as well as insufficient skills and experience;
- The roles and responsibilities of monitoring and evaluation are not clear; it is usually considered an externally imposed obligation by donors, and hence the monitoring and evaluation team gets busy with mechanical aspects such as supporting project managers only in data collection and report writing;
- The monitoring and evaluation system is too dependent on donor assistance and collapses when funding is terminated; the system is put in place without a thorough analysis, and hence relevant issues are not incorporated;
- The expectations of monitoring and evaluation are very high and demand much information to be collected; this information fails to consider outreach, effects, and impacts and instead focuses only on the financial and physical aspects of the projects, so the monitoring and evaluation information is of poor quality and rather irrelevant compared to the actual monitoring and evaluation functions;
- There is insufficient, untimely, or no feedback, and the needs and aspirations of stakeholders are overlooked and invisible in monitoring and evaluation;
- There is a lack of integration and cooperation between project monitoring and evaluation and other project management functions and, more importantly, poor accountability for failures; and
- Monitoring and evaluation findings and lessons learnt are not taken into consideration for future project design and programming.
Factors of project success or failure are not only issues for developing countries but also for developed ones, though they often seem associated only with the former. Since the mid-1930s, Ethiopia's socio-economic and political system has moved from feudo-capitalist to socialist-oriented and then to market-oriented with decentralized management.

In all three systems, the public sector has played a leading role in the planning, execution, monitoring and evaluation, and close-out of projects. According to Temesgen (2007), public sector progress reports on project implementation showed that projects were over- or under-budgeted and were not completed within the planned period. Furthermore, the researcher noted that most projects failed due to institutional management difficulties, problems related to policy and resources, and technical problems.
According to Getachew (2010), the reasons behind project failure in the Ethiopian public sector are weak project evaluation and poor planning, which limited the attention given to evaluation at both strategic and grassroots levels. Considering evaluations as impositions from donors resulted in a lack of commitment and poor communication about projects, programs, and the impact of policies when designing information collection platforms. Other results of this attitude include a lack of integration among the different actors in the evaluation system at various levels; evaluation findings and lessons learnt not being used for programming and informed decision making; narrowing of the scope of evaluation to only physical and financial reporting; and limited evaluation capacity at both the individual and system level.
One of the major factors in project failure in the Ethiopian public sector is weak project monitoring and evaluation. The project monitoring and evaluation system should therefore be well designed in order to track progress, improve efficiency, keep the project on course, and examine whether or not projects meet their objectives (MoFED, 2008).
The Project Life Cycle refers to a logical sequence of activities to accomplish the project’s goals
or objectives. Regardless of scope or complexity, any project goes through a series of stages
during its life. There is first an Initiation or Birth phase, in which the outputs and critical success
factors are defined, followed by a Planning phase, characterized by breaking down the project
into smaller parts/tasks, an Execution phase, in which the project plan is executed, and lastly a
Closure or Exit phase, that marks the completion of the project.
As of June 30, 2016, the country office has incorporated indicators related to project management, advocacy and policy development, project quality, and budget into the KPIs, on which line managers sit together with staff in one-to-one sessions to continuously assess and strengthen their capacity. The detailed implementation plan, the monitoring and evaluation plan, budget versus accomplishments, the phased budget, and the Indicator Performance Tracking Table (IPTT) are some of the deliverables expected from project managers. In all stages of the project life cycle, monitoring and evaluation and the project team have to work hand in hand to change the lives of children.
2.4. Conceptual Framework
The framework depicts the relationships between monitoring and evaluation and project success as mediated by management support. It is conceptualized that the factors influencing project success are the strength of the monitoring team, the approach used by the monitoring and evaluation team in evaluating projects, accountability (specified as information sharing, participation, and complaint and response mechanisms), and the stage of the project life cycle. The monitoring and evaluation activities, accountability, and project success are all geared towards the achievement of value addition for the organization.
This emphasis on constant re-evaluation of the effects of work including networking and
advocacy allows program staff to hold themselves and their program to higher standards of
accountability and impact. It also empowers them to prioritize learning as a valued outcome that
is essential to quality programming. By presenting monitoring and evaluation as much more than
reporting, i.e. as a tool for re-planning throughout the program cycle, the researcher begins to see
it as the engine room of the change that the project seeks. Finally, the tool is heavily visual and
has been produced with engaging illustrations that make it very well suited to translation.
Figure 1. Conceptual framework: the independent variables (the monitoring and evaluation system/practice, monitoring and evaluation competency, downward accountability, and the project life cycle) are linked to the dependent variable, project success, measured in terms of quality, cost, and time.
Chapter Three
3. Research Design and Methodology
According to Kothari (2004), the formidable problem that follows the task of defining the research problem is the preparation of the design of the research project, popularly known as the "research design". Decisions regarding what, where, when, how much, and by what means concerning an inquiry or a research study constitute a research design. A research design is the arrangement of conditions for the collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure. In fact, the research design is the conceptual structure within which research is conducted; it constitutes the blueprint for the collection, measurement, and analysis of data.
Explanatory and descriptive research designs were used for this research, as they enabled the researcher to describe monitoring and evaluation practice in Compassion International Ethiopia projects. The research objective is also to assess and explain whether monitoring and evaluation practices contribute to the success of the projects.
acquire a general picture of the problem. For the collection of the required data and information from primary sources, a questionnaire was used to obtain information within the framework of the study.
S.N | Department | Total employees | Employees included in the sample
2 | Program Department | 38 | 6
3 | Business Department | 22 | 4
4 | Four projects | 40 | 20
  | Total | 142 | 65
Not all of the employees above are directly involved in monitoring and evaluation processes; only those who take an active part in monitoring and evaluation could give better results. The researcher therefore used non-probability sampling to select 65 employees who were directly involved at the head office and at the projects. The selected employees are partnership facilitators, department managers, and project directors.
Primary data were collected by the researcher through a self-administered survey questionnaire and key informant interviews, and secondary data were collected and merged with the primary data. The primary sources include the Compassion International senior management team, middle-level managers, and monitoring and evaluation experts, reached through both the questionnaire and key informant interviews. Secondary data sources include the organization's records: narrative annual reports, evaluation reports, audit reports, monitoring visit reports, and related documents.
with what validity the data can be said to indicate any conclusions. Both quantitative and qualitative techniques of data analysis were used, employing percentages, tables, and charts with the help of IBM SPSS Statistics version 23 statistical software.
3.7. Regression Model
Multiple linear regression was used to estimate the effect of the independent variables on project success, where β1, β2, β3, and β4 are the coefficients of the independent variables and ε is the error term.
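The equation itself is not reproduced in the text above; a sketch of the model implied by the variables described in this chapter, using abbreviations chosen here for convenience rather than taken from the thesis, would be:

\[
\mathrm{PS} = \beta_0 + \beta_1\,\mathrm{MEP} + \beta_2\,\mathrm{MEC} + \beta_3\,\mathrm{DA} + \beta_4\,\mathrm{PLC} + \varepsilon
\]

where PS is project success, MEP is monitoring and evaluation practice, MEC is monitoring and evaluation competency, DA is downward accountability, PLC is the project life cycle stage, β0 is the intercept, and ε is the error term.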
3.8. Reliability and Validity
According to Saunders et al. (2009), internal validity in relation to questionnaires refers to the ability of the questionnaire to measure what the researcher intends it to measure. To achieve this, the questions in the questionnaire emanate from the broad research questions and are tailored to meet the research objectives.

Content validity, on the other hand, refers to the extent to which the measurement device, in this case the measurement questions in the questionnaire, provides adequate coverage of the investigative questions. This is achieved by providing a five-point Likert scale addressing a range of alternatives.

Criterion-related validity, sometimes known as predictive validity, is concerned with the ability of the measures (questions) to make accurate predictions. This is achieved by providing a range of different sets of questions that cover the main project success issues while giving rich and in-depth information.
Reliability, on the other hand, refers to consistency: the extent to which the data collection techniques or analysis procedures will yield consistent findings. According to Gliem (2003), when using Likert-type scales it is essential to calculate and report a coefficient of internal consistency reliability such as Cronbach's alpha. Because Cronbach's alpha does not provide reliability estimates for single items, the analysis must use the summated scales or subscales and not individual items. In this study, Cronbach's alpha was calculated for the 22 Likert-style items using SPSS statistical software, and the result is presented in the following table.
Table 3.1: Cronbach's Alpha reliability test
Cronbach's Alpha | No. of items
0.837 | 22
Source: Own survey 2020
Cronbach's alpha measures the reliability of research tools. For this study, the alpha coefficient calculated for the overall scale as a reliability indicator is 0.837. Values of Cronbach's alpha above 0.7 are considered good; the alpha value in this study is well above 0.7, so the questionnaire has very good reliability.
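The coefficient was computed in SPSS; purely as an illustration of the underlying calculation, a minimal Python sketch on a hypothetical response matrix (the items and scores below are invented, not the study's 22-item dataset) might look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summated scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering 4 Likert items (1-5)
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 3))
```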
Chapter Four
Results and Discussions
4.1. Data presentation and interpretation
There are several software packages for processing quantitative data some of which are broader
in scope and user friendly like the SPSS. After data are collected in a manner that can enable the
researcher to have concrete information to address the objective of the study, it was edited, coded
and entered in to a Statistical Package for Social Science (SPSS) version 20 for analysis. In the
scope of the survey, 65 questionnaires were distributed to employees in International Ethiopia
Head office senior manager, Department managers, monitoring and Evaluation experts, project
partnership facilitators and project Directors and other project workers. The data collected from
them were later used to assess project success. Moreover, the responses of the subjects are
presented, analyzed, and interpreted using SPSS 20, reliability tests, and other descriptive
statistics such as Mean, and standard deviation. . Out of a total of 65 respondents, 58 (89.2 %)
filled and return the questionnaires .Therefore it can be concluded that majority of the
respondents returned the questionnaire with answers. Therefore, the researcher used all the
questionnaires returned.
Table 4.1 shows the gender profile by frequency and percentage: out of the 59 respondents, 51 (86.4%) are male while 8 (13.6%) are female. Since the proportion of males is large, the organization is advised to encourage the involvement of women.
Educational level | Frequency | Percent
PhD | 1 | 1.7
MBA/MSc | 27 | 45.8
BA/BSc | 22 | 37.3
Diploma and below | 9 | 15.3
Total | 59 | 100.0

The study noted that the organization has competent, qualified employees who learn easily.
4.3.1 Descriptive analysis for Measurement of Project Success
Table 4.4: Measurement of project success
S.N | Item | Mean | Std. Deviation | Level | Rank
2.1 | Projects are completed at the planned time | 4.24 | 0.703 | Agree | 3
2.2 | Projects are completed within the planned budget | 4.29 | 0.671 | Agree | 2
2.3 | Projects have national as well as international quality standards that must be met | 4.37 | 0.667 | Agree | 1
2.4 | Project beneficiaries are satisfied and impacted positively | 4.15 | 0.715 | Agree | 4
2.5 | Projects realized meet the planned objectives and outcomes they are intended to achieve | 4.12 | 0.853 | Agree | 5
Table 4.4 summarizes respondents' opinions about project success in the projects. Regarding the statement that projects are completed at the planned time, the majority of respondents (91.5%) agreed; the mean value of 4.24 for this item also shows that this is a major contributing factor to success in the projects.

The researcher also directly asked whether projects are completed within the planned budget; the majority (91.5%) of respondents agreed, and the mean value of 4.29 supports this opinion, making it the second-ranked factor affecting success or performance in the projects.

As seen in the table, the majority (93.3%) also agreed that the projects have national as well as international quality standards that must be met (mean 4.37). It can be concluded that meeting national and international quality standards is an important aspect and a major factor in successful project performance.

The majority (88.1%) also agreed that project beneficiaries are satisfied and impacted positively (mean 4.15), so satisfying project beneficiaries is an important aspect and a major factor in successful project performance.

Finally, the majority (83.1%) agreed that the projects realized meet the planned objectives and outcomes they are intended to achieve (mean 4.12), so meeting planned objectives and outcomes is an important aspect and a major factor in successful project performance.
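The descriptive summaries in this chapter were produced in SPSS; purely as an illustration, a short pandas sketch (with hypothetical item names and responses, not the study's dataset) shows how the mean, standard deviation, agreement share, and rank for Likert items of this kind can be computed:

```python
import pandas as pd

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
data = pd.DataFrame({
    "completed_on_time":        [5, 4, 4, 5, 3, 4],
    "completed_on_budget":      [4, 5, 4, 4, 4, 5],
    "meets_quality_standards":  [5, 5, 4, 4, 5, 4],
})

summary = pd.DataFrame({
    "Mean": data.mean(),
    "Std. Deviation": data.std(),               # sample standard deviation (ddof=1)
    "Agree or above (%)": (data >= 4).mean() * 100,
})
summary["Rank"] = summary["Mean"].rank(ascending=False).astype(int)
print(summary.round(3))
```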
Table 4.5: Monitoring and evaluation practices
S.N | Item | Mean | Std. Deviation | Level | Rank
3.3 | The monitoring and evaluation system is built with a thorough situational analysis | 4.42 | 0.770 | Agree | 2
3.4 | The monitoring and evaluation system has buy-in from the senior management team | 4.44 | 0.650 | Agree | 1
3.5 | The monitoring and evaluation system reflects the theory of change and supports the mission and vision of the organization | 4.27 | 0.739 | Agree | 3
Table 4.5 shows that, with regard to the statement "the monitoring and evaluation system is effective, efficient, and contributes to achieving the project objective", the majority of respondents (89.9%) agreed, with a mean of 4.24. The majority (88.2%) also agreed that the scope and purpose of the monitoring and evaluation system is clear (mean 4.22); most respondents (93.2%) agreed that the monitoring and evaluation system is built with a thorough situational analysis (mean 4.42); and most respondents (91.5%) agreed that the monitoring and evaluation system has buy-in from the senior management team (mean 4.44). Most respondents (89.9%) also agreed that the monitoring and evaluation system reflects the theory of change and supports the mission and vision of the organization (mean 4.27). From these findings it can be concluded that, overall, monitoring and evaluation practice in the organization is a significant and important factor in project performance.
Table 4.6: Monitoring and evaluation competency
S.N | Item | Mean | Std. Deviation | Level | Rank
4.3 | The organization has a system in place to listen to the people it aims to assist, incorporating their views and concerns and influencing program decisions in project cycle management | 4.66 | 0.576 | Strongly agree | 1
4.4 | The organization has a system to build the capacity (knowledge, skills and attitudes) of children to participate in project/program development | 4.58 | 0.622 | Strongly agree | 2
Table 4.6 indicates that, with regard to the statement "the organization has a system in place to ensure that the children it aims to assist and other stakeholders have access to timely, relevant and clear information about the organization, program, project and its activities", the majority of respondents (94.9%) agreed, with a mean of 4.46. The majority (94.8%) also agreed that the organization has a system to analyze the information collected from stakeholders to further improve program quality (mean 4.44); most respondents (97.5%) agreed that the organization has a system in place to listen to the people it aims to assist, incorporating their views and concerns and influencing program decisions in project cycle management (mean 4.66); and most respondents (95.5%) agreed that the organization has a system to build the capacity (knowledge, skills and attitudes) of children to participate in project/program development (mean 4.58). All of the monitoring and evaluation competency criteria have mean values above 4. From these findings it can be concluded that, overall, monitoring and evaluation competency in the organization is a significant and important factor in project performance.
4.3.4 Descriptive Analysis of Downward Accountability
Table 4.7: Assessment of downward accountability
S.N | Item | Mean | Std. Deviation | Level | Rank
5.1 | The organization has a system in place to incorporate children's participation in project/program development, implementation, monitoring and evaluation | 4.66 | 0.576 | Strongly agree | 1
5.2 | The organization has a system in place to enable the beneficiaries it aims to assist and other stakeholders to provide feedback and receive a response through effective, accessible and safe information-sharing mechanisms and processes | 4.58 | 0.675 | Strongly agree | 2
5.3 | The organization has a system in place to store, verify and analyze feedback and complaints and to use them as input for future programming and quality program delivery | 4.39 | 0.743 | Agree | 3
Table 4.7 shows that, with regard to the statement "the organization has a system in place to incorporate children's participation in project/program development, implementation, monitoring and evaluation", the majority of respondents (98.3%) agreed, with a mean of 4.66. The majority (96.6%) also agreed that the organization has a system in place to enable the beneficiaries it aims to assist and other stakeholders to provide feedback and receive a response through effective, accessible and safe information-sharing mechanisms and processes (mean 4.58). Moreover, most respondents (95.0%) agreed that the organization has a system in place to store, verify and analyze feedback and complaints and to use them as input for future programming and quality program delivery (mean 4.39). All of the downward accountability criteria have mean values above 4. From these findings it can be concluded that, overall, downward accountability in the organization is a significant and important factor in project performance.
4.3.5 Descriptive Analysis of the Project Life Cycle in the Project
Table 4.8: Assessment of the project life cycle in the project
S.N | Item | Mean | Std. Deviation | Level | Rank
6.1 | The engagement of monitoring and evaluation staff in the initiation stages of the project is high | 4.49 | 0.626 | Agree | 3
6.3 | The engagement of monitoring and evaluation staff in the planning stages of the project is high | 4.51 | 0.569 | Strongly agree | 2
6.4 | The engagement of monitoring and evaluation in the execution stages of the project is high | 4.44 | 0.595 | Agree | 4
6.5 | The engagement of monitoring and evaluation in the evaluation stages of a project/program is high | 4.39 | 0.616 | Agree | 5
Table 4.8 shows that, with regard to the statement "The engagement of monitoring and evaluation
staff in the initiation stages of the project is high", the majority of respondents, 96.6%, agreed
(mean = 4.49). Similarly, 91.6% agreed that the role of monitoring and evaluation in baseline
development is high (mean = 4.56), and 96.8% agreed that the engagement of monitoring and
evaluation staff in the planning stages of the project is high (mean = 4.51). Moreover, 98.7% agreed
that the engagement of monitoring and evaluation in the execution stages of the project is high
(mean = 4.44), and 96.6% agreed that the engagement of monitoring and evaluation in the evaluation
stages of the project/program is high (mean = 4.39). All of the project life cycle criteria have mean
values above 4. From these findings it can be concluded that the involvement of monitoring and
evaluation across the project life cycle is a significant factor in project performance.
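As an illustration of how descriptive statistics of this kind (mean, standard deviation, percentage agreement and rank) can be reproduced, the following minimal Python sketch is provided. It is not part of the original analysis, which was carried out in SPSS; the file name and column layout are hypothetical, assuming one column per Likert item coded 1 to 5.

import pandas as pd

# Hypothetical item-level data: one column per Likert item, coded 1-5.
items = pd.read_csv("likert_items.csv")  # assumed file and layout

summary = pd.DataFrame({
    "mean": items.mean(),
    "std_dev": items.std(),
    # share of respondents choosing Agree (4) or Strongly agree (5)
    "pct_agree": (items >= 4).mean() * 100,
})
summary["rank"] = summary["mean"].rank(ascending=False, method="min").astype(int)
print(summary.round(2))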
Table 4.9 Pearson correlations between the study variables and project success

Variable                   r with project success  Sig. (2-tailed)  N
Monitoring and evaluation  .629                    .000             59
Evaluation competency      .495                    .000             59
Downward accountability    .429                    .000             59
Project life cycle         .377                    .002             59

Source: own survey, 2020
In interpreting the correlation coefficients in Table 4.9, the following intervals are used: 0 to 0.20
corresponds to a very weak relationship; 0.21 to 0.40 to a weak relationship; 0.41 to 0.60 to a
moderate relationship; 0.61 to 0.80 to a strong relationship; and 0.81 to 1.00 to a very strong
relationship (Cohen, 2003).
From the correlation results in Table 4.9, there is a significant, positive and strong relationship
between monitoring and evaluation and project success (r = .629, sig. = .000). There is also a
significant, positive and moderate relationship between evaluators' competency and project success
(r = .495, sig. = .000), and a significant, positive and moderate relationship between downward
accountability and project success (r = .429, sig. = .000). Finally, there is a significant, positive and
weak relationship between the project life cycle and project success (r = .377, sig. = .002). From this
correlation analysis it can be inferred that all of the identified project success factors are correlated
with project success, which is measured in terms of the project accomplishment requirements.
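Pairwise correlations of this kind can also be computed outside SPSS. The short Python sketch below is offered only as an illustration under assumed data: the file name and the column names (me_system, me_competency, downward_accountability, project_life_cycle, project_success) are hypothetical composite scores per respondent, not the study's actual variable names.

import pandas as pd
from scipy import stats

df = pd.read_csv("survey_scores.csv")  # assumed file of composite scores per respondent

predictors = ["me_system", "me_competency", "downward_accountability", "project_life_cycle"]

# Pearson correlation of each factor with project success, with its p-value.
for col in predictors:
    r, p = stats.pearsonr(df[col], df["project_success"])
    print(f"{col}: r = {r:.3f}, p = {p:.3f}, N = {len(df)}")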
In this study, multiple regression was conducted in order to examine the relationship between all of
the significantly correlated factors and the dependent variable, project success.
In conducting the multiple regression analysis, several main assumptions were considered and
examined in order to ensure that multiple regression was appropriate (Hair et al., 2006). The
assumptions examined were: (1) outliers, (2) normality, linearity and homoscedasticity, and
(3) multicollinearity. Regarding outliers, the data were checked for any potential outliers in the
analysis. Pallant (2007) noted that "multiple regression is very sensitive to outliers (i.e. very high or
low scores)"; thus, outliers should be removed before running the regression analysis (Tabachnick &
Fidell, 2007). Multivariate outliers can be detected using statistical methods such as casewise
diagnostics. While conducting the multiple regression and collinearity diagnostics, five outliers were
detected and removed. To check normality, a histogram was plotted using the SPSS regression
graphs; the graph below shows that the assumption of normality is satisfied.
Figure 1: Histogram of the regression residuals (normality check)
Moreover, to check linearity, a scatter plot was produced using the SPSS regression graphs; the graph
below shows that the assumption of linearity is met.
Figure 3: Scatter plot (linearity and homoscedasticity check)
The figures above show that the assumptions of linearity, normality and homoscedasticity have been
met. Regarding multicollinearity, the tolerance values are well above 0, ranging from 0.373 to 0.500
(see the coefficients in Table 4.12), and the Variance Inflation Factor (VIF), which is simply the
reciprocal of tolerance, ranges from 1.999 to 2.688. When VIF is higher than 10, there is high
multicollinearity and the B and Beta coefficients become unstable. Since all VIF values here are well
below 10, multicollinearity is not a threat to the substantive conclusions of this study, and the B and
Beta coefficients are stable.
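As a complement to the SPSS collinearity diagnostics, tolerance and VIF can also be checked with the following minimal Python sketch. It assumes the same hypothetical file and column names as the earlier sketches and is only an illustration of the calculation, not the study's procedure.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_scores.csv")  # assumed file and columns
X = sm.add_constant(df[["me_system", "me_competency",
                        "downward_accountability", "project_life_cycle"]])

# VIF for each predictor (index 0 is the constant); tolerance is the reciprocal of VIF.
for i, name in enumerate(X.columns[1:], start=1):
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.3f}, tolerance = {1 / vif:.3f}")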
4.4.2.1 Model Summary
Table 4.10 Model summary of the multiple regression (R Square = 0.48)
a. Predictors: (Constant), project life cycle, monitoring and evaluation, competency, downward accountability
b. Dependent variable: project success
R² is a measure of how much of the variability in the outcome (in this case, project success) is
accounted for by the predictors (i.e. the factors of project success). As shown in Table 4.10, the R²
value is 0.48, which means that the identified factors of project success jointly explain 48% of the
variation in project success. The ANOVA table also suggests that the model is significant in
explaining this variance: the significance result of p = 0.000, which is below the 0.05 threshold,
supports the significance of the model.
4.4.2.2 Analysis of Variance
Table 4.11 ANOVA

Model        Sum of Squares  df   F       Sig.
Regression   8.615           4    12.482  .000
Residual     9.317           54
Total        17.932          58
The ANOVA table shows that the model is a good fit (F = 12.482, df1 = 4, df2 = 54, p < 0.0001).
That is, a substantial share of the total variation in project success (SST = 17.932) is explained by the
latent variables (SSR = 8.615) relative to the variation attributed to random error (SSE = 9.317).
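As a simple arithmetic check, the reported R square and F statistic follow directly from these sums of squares:

R² = SSR / SST = 8.615 / 17.932 ≈ 0.480
F = (SSR / df1) / (SSE / df2) = (8.615 / 4) / (9.317 / 54) ≈ 2.154 / 0.173 ≈ 12.48

which agrees with the R square of 0.48 in Table 4.10 and the F value of 12.482 reported above.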
4.4.2.3 Coefficients
Table 4.12 Coefficients

Model                      B (unstandardized)  t      Sig.
(Constant)                 1.107
Monitoring and evaluation  3.645               4.682  .000
Competency                 2.826               4.160  .000
Downward accountability    1.079               2.980  .038
Project life cycle         1.008               2.242  .047

a. Dependent variable: project success
Collinearity statistics: tolerance ranges from 0.373 to 0.500 and VIF from 1.999 to 2.688 across the predictors.
As shown in the coefficients table, the B column gives the regression coefficients used to predict the
dependent variable from the independent variables, and the Std. Error column gives the standard
errors associated with those coefficients. Beta, the standardized coefficient, measures how strongly
each predictor variable influences the criterion variable; these are the coefficients that would be
obtained if all variables in the regression, dependent and independent alike, were standardized, so
their magnitudes can be compared to see which predictor has the larger effect. The relative absolute
magnitudes of the Beta coefficients therefore reflect the relative importance of the predictors in the
model.
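For reference, the standardized coefficient of a predictor j can be obtained from its unstandardized coefficient by rescaling with the sample standard deviations, Beta_j = B_j × (s_Xj / s_Y); this rescaling is what makes the Beta values comparable across predictors measured on different scales.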
Using the SPSS output presented in the table above, the estimated regression equation
(Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε) becomes:

Y = 1.107 + 3.645X1 + 2.826X2 + 1.079X3 + 1.008X4

where X1 is monitoring and evaluation, X2 is competency, X3 is downward accountability and X4 is
the project life cycle. The latent variables monitoring and evaluation (t = 4.682, p < 0.0001),
competency (t = 4.160, p < 0.0001), downward accountability (t = 2.980, p = .038) and project life
cycle (t = 2.242, p = .047) are all statistically significant at the 5% level of significance as factors of
project effectiveness and efficiency. The regression equation therefore takes all of these factors into
account: monitoring and evaluation, competency, downward accountability and the project life cycle.
Holding the other factors constant, a unit change in monitoring and evaluation would lead to a 3.645
improvement in project success; a unit change in competency would lead to a 2.826 improvement; a
unit change in downward accountability would lead to a 1.079 improvement; and a unit change in the
project life cycle would lead to a 1.008 improvement in project success.
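For readers who wish to reproduce a regression of this form outside SPSS, the following minimal Python sketch fits an ordinary least squares model with statsmodels. It reuses the hypothetical file and column names from the earlier sketches and is an illustration only; it does not reproduce the study's exact SPSS output.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")  # assumed file and columns

X = sm.add_constant(df[["me_system", "me_competency",
                        "downward_accountability", "project_life_cycle"]])
y = df["project_success"]

model = sm.OLS(y, X).fit()
print(model.summary())   # R-squared, ANOVA F statistic, B coefficients, t and p values
print(model.params)      # the unstandardized coefficients (the B column)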
Chapter Five
Summary, Conclusion and Recommendations
5.1. Introduction
This chapter gives a summary of the key findings of the study, presented according to the objectives
of the study. Conclusions are drawn from the findings, and recommendations are provided concerning
the role of monitoring and evaluation functions in achieving project success and the monitoring and
evaluation practices assessed.
5.2. Summary of Findings
The monitoring and evaluation team is affected by the availability of budget, how effectively that
budget is utilized, and the absence of monitoring and evaluation staff. The role of monitoring and
evaluation in the sustainability of projects also received a weak weighted average mean, which
implies that the monitoring and evaluation system and the team's competency need to support
projects so that they are sustained beyond the project period.
The research findings revealed that the complaint and response mechanisms and child participation
received low weighted means, implying that CIE still has some way to go in making its accountability
mechanisms more robust across its different projects and mandates. A related finding is that there is
no system for project staff to raise concerns regarding management or leadership; the only channel
currently in place is the anonymous confidential reporting system, which is designed to prevent fraud.
The findings showed that there is a positive relationship between the monitoring and evaluation
functions and project success. This indicates that a monitoring and evaluation system is in place, and
that the role of monitoring and evaluation in project cycle management, the strengthening of the
monitoring and evaluation function to improve downward accountability mechanisms, and the
competency of the monitoring and evaluation team all contribute to the success of projects.
Thus, the presence of a sound monitoring and evaluation system contributes substantially to project
success, although its absence does not necessarily result in project failure. Monitoring and evaluation
contributes through the installation of a system supported by the recruitment of competent staff and
the continuous strengthening of their capacity, through the strengthening of internal accountability
mechanisms, and through the sound involvement of the monitoring and evaluation expert across the
project cycle stages. Other parameters can also contribute to project success, but the dimensions
researched here were found to contribute to it.
Views on the involvement of the monitoring and evaluation expert along the project life cycle stages
varied: some respondents said the expert should participate in the whole project life cycle, some only
in the baseline, monitoring and evaluation stages, and others only in the planning stage of a project. It
was also reflected that CIE project managers are not certified project managers and are not well
conversant with the relevant tools and techniques, which is why monitoring and evaluation tools are
not used properly alongside the other project management tools. The multiple regression analysis
modelled the linear relationship between the dependent variable and the independent variables, which
were the monitoring and evaluation system, the monitoring and evaluation team competency,
downward accountability and the project life cycle stage. According to the results of the regression
analysis, the independent variables explain 47 percent of the variation in project success (R²). The F
statistic (ANOVA) for the model was 8.437, which was significant at the 5 percent level of
significance (the p-value was 0.000, less than 0.05). The coefficients table provides the information
needed to predict success from the monitoring and evaluation system, monitoring and evaluation
competency, downward accountability and the life cycle stages.
5.3. Conclusion
The key role of the monitoring and evaluation function is to provide evidence-based feedback to
management, which serves as input for decision making and for tracking project progress. The
research problem that this study set out to address was the role of monitoring and evaluation
functions in achieving project success.
In response to the research problem, and hence in answering the research questions, this study
gathered and analyzed data that led to the following conclusion. The research concluded that, in
general, projects implemented by CIE are successful. The success of these projects was the result of a
strong monitoring and evaluation system, a competent monitoring and evaluation team, strong
downward accountability mechanisms and close monitoring of the projects at all stages of the project
life cycle.
5.4. Recommendations
Based on the findings of the study, the researcher offers the following recommendations for CIE to
take up in its monitoring and evaluation strategic direction and future programming.
The findings revealed that the budget allocated for M&E support, specifically for monitoring and
evaluation experts and activities, has not been adequate; CIE should therefore allocate an adequate
budget for monitoring and evaluation staff and activities.
The monitoring and evaluation practice will be improved if projects are implemented according to
the plan and concrete decisions are made on issues identified during project monitoring.
Project and program managers do not use M&E tools as one of their project/program management
tools. The researcher recommends that the use of M&E tools be made part of managers' key
performance indicators, so that they are accountable for their actions or inactions.
References

Abbasi, N., et al. (2014). Project failure case studies and suggestion. International Journal of Computer Applications.
Ferris, James M., & Graddy, Elizabeth (1994). Organizational choices for public service.
Kusek, Jody Zall, & Rist, Ray C. (2004). Ten Steps to a Results-Based Monitoring and Evaluation System. Washington, DC: The World Bank, pp. 23-25.
MoFED (2008). National Economic Parameters and Conversion Factors for Ethiopia (Third Edition).
Muriithi, N., & Crawford, L. (2003). Approaches to project management in Africa: implications for international development projects. International Journal of Project Management, 21(5), 309-319.
Naidoo, I. A. (2011). The role of monitoring and evaluation in promoting good governance in South Africa.
Prabhakar, G. P. (2008). What is project success: A literature review. International Journal of Business and Management.
Save the Children International (2016). Department of Program Development and Quality (PDQ): Monitoring, evaluation, accountability and learning unit quality benchmark piloting report.
Singh, Kultar (2007). Quantitative Social Research Methods. Los Angeles, CA: Sage.
Annexes
Questionnaire for M&E and Project Management Experts
Jimma University
School of Post Graduate Study
Questionnaire on "the role of monitoring and evaluation functions in achieving project success" in
Compassion International Ethiopia assisted projects.
Questionnaire
Dear Respondent,
Your participation in this questionnaire is voluntary, and you will not be paid for your participation.
You may withdraw from the study at any time without penalty or harm of any kind. If you decline to
participate or choose not to complete the questionnaire, the researcher will not inform anyone of your
decision, and no foreseeable negative consequences will result. Completing the questionnaire will
require approximately 25 minutes. There are no known risks associated with completing the
questionnaire. If, however, you feel uncomfortable in any way during this process, you may decline to
answer any question or leave the questionnaire incomplete. The researcher will not identify you by
name in any report using information obtained from your questionnaire; your confidentiality as a
participant in this study will remain secure. Subsequent uses of the data generated by this
questionnaire will protect the anonymity of all individuals.
Part One: General information about the respondent
2.6 Are there any other project success indicators missing from the above list? If so, please specify
below: ----------------------------------------------------------------------------------------
Part Three: Assessment of Monitoring and Evaluation Practices, more specifically the monitoring and
evaluation system

S.N  Statement                                        Strongly  Disagree  Neutral  Agree  Strongly
                                                      disagree                            agree
                                                      1         2         3        4      5
3.1  The monitoring and evaluation system is
     effective, efficient and contributes to
     achieving the project objective
3.2
Monitoring and evaluation competency
(1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree)
5.3 The organization has a system in place to store, verify and analyze the feedback and complaints
    and use them for future programming and as an input for quality program delivery
5.4 What do you think is the role of monitoring and evaluation in improving the downward
    accountability mechanisms? -------------------------------------------
Project life cycle in your project

S.N  Statement                                                  Strongly  Disagree  Neutral  Agree  Strongly
                                                                disagree                            agree
                                                                1         2         3        4      5
6.1  The engagement of monitoring and evaluation staff in the
     initiation stages of the project is high
6.2  The role of monitoring and evaluation in baseline
     development is high
6.3  The engagement of monitoring and evaluation staff in the
     planning stages of the project is high
6.4  The engagement of monitoring and evaluation in the
     execution stages of the project is high
6.5  The engagement of monitoring and evaluation in the
     evaluation stages of a project/program is high
6.6  The engagement of monitoring and evaluation in the
     closing stages of the project is high