THE EFFECTS OF MONITORING AND EVALUATION PRACTICE ON
PROJECT SUCCESS IN NGOS: A CASE STUDY ON PROJECTS FUNDED BY
COMPASSION INTERNATIONAL ETHIOPIA

A Thesis Submitted to the School of Graduate Studies of Jimma University in
Partial Fulfillment of the Requirements for the Award of the Degree of Master of
Arts in Project Management and Finance

BY

MESFIN ASSEFA TEKELIE

JIMMA UNIVERSITY
College of Business and Economics
Department of Accounting and Finance

JUNE, 2020
Addis Ababa, Ethiopia
THE EFFECTS OF MONITORING AND EVALUATION PRACTICE ON
PROJECT SUCCESS IN NGOS: A CASE STUDY ON PROJECTS FUNDED BY
COMPASSION INTERNATIONAL ETHIOPIA

By

Mesfin Assefa Tekelie

Under the Guidance of:

Mr. Abel Worku (Assistant Professor)

and

Mr. Mohammed Getahun (Assistant Professor)

A Thesis Submitted to the School of Graduate Studies of Jimma University in
Partial Fulfillment of the Requirements for the Award of the Degree of Master of
Arts in Project Management and Finance

Jimma University
College of Business and Economics
Department of Accounting and Finance

JUNE, 2020
Addis Ababa, Ethiopia
JIMMA UNIVERSITY
COLLEGE OF BUSINESS AND ECONOMICS
DEPARTMENT OF ACCOUNTING AND FINANCE

THE EFFECTS OF MONITORING AND EVALUATION PRACTICE ON
PROJECT SUCCESS IN NGOS: A CASE STUDY ON PROJECTS FUNDED BY
COMPASSION INTERNATIONAL ETHIOPIA

BY

MESFIN ASSEFA TEKELIE

APPROVED BY BOARD OF EXAMINERS

______________________________                         ________________________
Dean, Graduate Studies                                  Signature

Mr. Abel Worku (Assistant Professor)                    ________________________
Advisor                                                 Signature

Hailemichael Mule (PhD)                                 ________________________
External Examiner                                       Signature

Mr. Abey Getahun (Assistant Professor)                  ________________________
Internal Examiner                                       Signature


DECLARATION
I hereby declare that this thesis entitled "The Effect of Monitoring and Evaluation Functions in Achieving
Project Success: The Case of CIE-Financed Projects" is my original work. I further confirm that the
thesis has not been submitted either in part or in full to any other higher learning institution for the
purpose of earning any degree.

Declared by                         Date                              Signature

Mesfin Assefa                       ____________________              _________________

CERTIFICATE

This is to certify that the thesis entitled "The Effect of Monitoring and Evaluation Practice on Project
Success: A Case of Compassion International Ethiopia Financed Projects", submitted for the award of the
Degree of Master of Project Management and Finance (MPMF), is a record of research work carried out by
Mr. Mesfin Assefa under our guidance and supervision.

We hereby declare that no part of this thesis has been submitted to any other university or institution for
the award of any degree or diploma.

Main Adviser's Name                             Date                    Signature

Mr. Abel Worku (Assistant Professor)            ____________            _______________

Co-Adviser's Name                               Date                    Signature

Mr. Mohammed Getahun (Assistant Professor)      _____________           _______________

ACKNOWLEDGEMENT
This work has come to completion not only through the effort of the researcher but also through the
support of many individuals and organizations. To begin with, I would like to express my sincere
gratitude and thanks to Mr. Abel Worku (Assistant Professor) and Mr. Mohammed Getahun (Assistant
Professor) for their constructive advice, support and helpful recommendations throughout the
course of this study. Their down-to-earth personalities, attention to detail, deep knowledge,
constructive criticism, and continuous support set an example I hope to match someday. Without
their support, this work would not have come to reality. My gratitude also goes to my family for
their support, valuable ideas and much love.

Last but not least, I would like to express my great thanks to all the study participants for sharing
pertinent ideas and information and for providing different materials important for this thesis.

Table of contents

Contents
DECLARATION ----------------------------------------------------------------------------------------------------- I
CERTIFICATE ------------------------------------------------------------------------------------------------------- I
ACKNOWLEDGEMENT ----------------------------------------------------------------------------------------- II
Table of contents --------------------------------------------------------------------------------------------------- III
List of tables ---------------------------------------------------------------------------------------------------------- V
List of figures-------------------------------------------------------------------------------------------------------- VI
Acronyms ----------------------------------------------------------------------------------------------------------- VII
Abstract ------------------------------------------------------------------------------------------------------------ VIII
Chapter One ----------------------------------------------------------------------------------------------------------- 1
1. Introduction -------------------------------------------------------------------------------------------------------- 1
1.1. Background of the Study ------------------------------------------------------------------------------------------- 1
1.2. Statement of the Problem ------------------------------------------------------------------------------------- 4
1.3. Research Questions ------------------------------------------------------------------------------------------------ 5
1.4. General Objective ---------------------------------------------------------------------------------------------------- 5
1.4.1 Specific Objectives ---------------------------------------------------------------------------------------------- 5
1.5. Hypothesis ------------------------------------------------------------------------------------------------------------- 5
1.6. Limitation of the study ---------------------------------------------------------------------------------------------- 6
1.7. Scope of the Study ------------------------------------------------------------------------------------------------- 6
1.8. Significance of the Study ---------------------------------------------------------------------------------- 7
1.9. Organization of the Study ------------------------------------------------------------------------------------------ 7
CHAPTER TWO ----------------------------------------------------------------------------------------------------- 8
2. REVIEW OF RELATED LITERATURE ------------------------------------------------------------------ 8
2.1. Theoretical Review -------------------------------------------------------------------------------------------------- 8
2.1.1 Monitoring and evaluation system --------------------------------------------------------------------------- 8
2.1.2 Monitoring and Evaluation in project management ---------------------------------------------------- 10
2.1.3 The Benefits of Monitoring and Evaluation for Organizations --------------------------------------- 11
2.1.4 Purposes of Monitoring and Evaluation for Public Organization---------------------------------- 12
2.1.5. Introducing the 10-Step Model for Building a Results-Based M&E System --------------------- 12
2.3. Empirical Review -------------------------------------------------------------------------------------------------- 14
2.3.1. Project Success ------------------------------------------------------------------------------------------------ 15

2.3.2. Monitoring and Evaluation --------------------------------------------------------------------------------- 16
2.3.3. Project Life cycle stage -------------------------------------------------------------------------------------- 16
2.4. Conceptual Framework ------------------------------------------------------------------------------------------- 17
Figure 1. Conceptual Framework --------------------------------------------------------------------------------- 17
Chapter Three -------------------------------------------------------------------------------------------------------18
3. Research Design and Methodology ---------------------------------------------------------------------------18
3.1. Research Design --------------------------------------------------------------------------------------------------- 18
3.2. Data source --------------------------------------------------------------------------------------------------------- 18
3.3. Data Collection Procedure ------------------------------------------------------------------------------------- 20
3.4. Data Gathering Instruments ----------------------------------------------------------------------------------- 20
3.4.1. Questionnaire ------------------------------------------------------------------------------------------------ 20
3.4.2. Key Informant Interview ------------------------------------------------------------------------------------
3.5. Target Population ------------------------------------------------------------------------------------------------ 19
3.6. Sample and Sampling Techniques --------------------------------------------------------------------------- 19
3.6. Method of Data Analysis ----------------------------------------------------------------------------------------- 20
3.7. Regression model ------------------------------------------------------------------------------------------------- 21
3.8. Data analysis technique --------------------------------------------------------------------------------------
3.9. Reliability and Validity ------------------------------------------------------------------------------------------- 21
3.10. Ethical Consideration -------------------------------------------------------------------------------------------- 22
Chapter Four ---------------------------------------------------------------------------------------------------------23
Results and Discussions --------------------------------------------------------------------------------------------23
4.1. Data presentation and interpretation ----------------------------------------------------------------------- 23
4.2. Descriptive statistical Analysis -------------------------------------------------------------------------------- 23
4.2.1. Demographic Characteristics of the Respondents ------------------------------------------------------ 23
4.4. Inferential Analysis----------------------------------------------------------------------------------------------- 30
Chapter Five ---------------------------------------------------------------------------------------------------------37
Summary, Conclusion and Recommendations ----------------------------------------------------------------37
5.1. Introduction -------------------------------------------------------------------------------------------------------- 37
5.2. Summary of Key Findings-------------------------------------------------------------------------------------- 37
5.3. Conclusion ---------------------------------------------------------------------------------------------------------- 38
5.4 Recommendations ------------------------------------------------------------------------------------------------- 39
References ---------------------------------------------------------------------------------------------------------- XL
Annexes ----------------------------------------------------------------------------------------------------------- XLII

List of tables

Table 3.1: Cronbach's Alpha reliability test --------------------------------------------------------------------------- 22


Table 4.1. Participants' gender -------------------------------------------------------------------------------------- 23
Table 4.2. Educational qualification of respondents ----------------------------------------------------------------- 24
Table 4.3. Current position ------------------------------------------------------------------------------------------ 24
Table 4.4. Measurement of project success ---------------------------------------------------------------------------- 25
Table 4.5. Assessment of monitoring and evaluation practices, specifically the monitoring and evaluation system ----- 26
Table 4.6. Assessment of monitoring and evaluation competency ------------------------------------------------------- 27
Table 4.7. Assessment of downward accountability --------------------------------------------------------------------- 28
Table 4.8. Assessment of the project life cycle in the project ------------------------------------------------------- 29
Table 4.9. Correlation analysis -------------------------------------------------------------------------------------- 30
Table 4.10. Model summary of multiple regression --------------------------------------------------------------------- 33
Table 4.11. ANOVA ---------------------------------------------------------------------------------------------------- 34
Table 4.12. Coefficients --------------------------------------------------------------------------------------------- 35

List of figures

Figure 1: Conceptual framework …………………………………………………………………17

Figure 2: Scatterplot………………………………………………………………………….…..34

Figure 3: Histogram………………………………………………………………………………33

Figure 4: Probability plot………………………………………………………………………....33

Acronyms
SPSS: Statistical Package for the Social Sciences

INGO: International Non-Governmental Organization

HO: Head Office

M&E: Monitoring and Evaluation

CIE: Compassion International Ethiopia

Abstract
Monitoring and evaluation of projects is an integral part of the project cycle and of good management
practice. An effective monitoring and evaluation system is fundamental if the goals of a project are to
be achieved. By setting up proper monitoring and evaluation systems, planning, efficiency and
proper funds utilization can be achieved to enhance the performance of projects. The general objective
of this paper is to assess the effect of monitoring and evaluation functions in achieving project success.
To achieve the study objective, an explanatory research design with a mixed-method approach was
employed. Primary data were collected through a survey questionnaire from 65 project staff
members who were selected using a convenience sampling technique. Interviews were also conducted with
senior management team members to triangulate the quantitative data obtained from the survey and the
regression analysis. The findings of the study revealed that poor practice of the monitoring and evaluation
system, team incompetency, weak program accountability and the project life cycle stage adversely
affect CIE-funded projects. Based on the analysis, the study recommends that CIE-funded projects
work on improving project success by paying attention to monitoring and evaluation
procedures, particularly by preparing adequate work breakdown structures with the expected outcomes
to reduce project ineffectiveness and inefficiency. Using a standardized model of monitoring and
evaluation contributes to the success of projects, and the monitoring and evaluation competence of
employees should be improved continuously by providing relevant training programs. The study also found
that there is a significant relationship between each of the mentioned factors and the dependent
variable, project success. The researcher also recommended that M&E tools be made part of the key
performance indicators so that staff are held accountable for their actions or inactions.

Keywords: Evaluation, Monitoring, Project Success, Practices

Chapter One
1. Introduction
This study assessed the effect of monitoring and evaluation on Compassion International
Ethiopia projects. Compassion International Ethiopia is an international NGO which
has been actively involved in Ethiopia in a variety of developmental and humanitarian activities
since 1993 G.C. The research shows the role of monitoring and evaluation in the success of
selected Compassion International Ethiopia-assisted projects. Compassion International
has more than 470 projects serving around 110,000 children in Ethiopia with holistic services.
Compassion works primarily through child sponsorship but also has specific
initiatives to help babies and mothers, to develop future leaders, and to meet critical needs
(CIE annual report, 2016).

All Compassion-assisted projects serve children holistically. In education, they provide
educational materials, school fees and tutorials; in health, they provide education on disease
prevention and treatment during illness; and in socio-emotional provision, children receive
age-related trainings and socio-emotional education with concerned professionals. The
different projects in different areas are designed to contribute to changes in children's lives.
Project and program level reports, monitoring reports, minutes of review meetings and
evaluations are used to validate the findings and recognize the role of monitoring and evaluation
in project success. The purpose of this research is to investigate the effect of monitoring and
evaluation practice in achieving project success in Compassion International Ethiopia (CIE
annual report, 2016).

1.1. Background of the Study


Monitoring is defined as "a continuing function that aims primarily to provide the management
and main stakeholders of an ongoing intervention with early indications of progress, or lack
thereof, in the achievement of results" (World Bank, 2007, p. 2). According to the World Bank,
the regular collection of information through continuous monitoring assists project managers in
making timely decisions, guarantees accountability, and provides the basis for evaluation and
learning. "Monitoring is a type of evaluation performed when the project is being implemented,
and the data obtained through monitoring is made use of in evaluation" (Bamberger, 1986, p. 3).

The PMBOK (2004) highlights various factors that may lead
to project success, including creating the right teams; involving stakeholders; preparing a detailed
project scope; influencing stakeholders; information; managing expectations; communication;
negotiation; and monitoring and evaluation. This, therefore, implies that monitoring and
evaluation is one of the critical factors of project success. Equally, several studies have been
carried out focusing on project success. For example, Raymond and Bergeron (2008)
identified several indicators of project success in the literature, including "reduction of
the time required to complete a task, improved control of activity costs, better management of
budget, improved planning of activities, better monitoring of activities, more efficient resource
allocation, and better monitoring of the project schedule". Project management is hence
acknowledged as the most successful approach to managing the changes brought about by
projects, because it has techniques and tools that enable control and delivery of the
project activities within given deliverables, timeframe and budget (Shapiro, 2011). M&E is one of
the tools that assist project managers in tracking performance and also provide the management with
information for making decisions regarding the project.

Context (situation) monitoring tracks the setting in which the project/program
operates, especially as it affects identified risks and assumptions, but also any unexpected
considerations that may arise. It includes the field as well as the larger political, institutional,
funding, and policy context that affect the project/program. For example, a project in a conflict-
prone area may monitor potential fighting that could not only affect project success but endanger
project staff and volunteers.

Beneficiary monitoring tracks beneficiary perceptions of a project/program. It includes
beneficiary satisfaction or complaints with the project/program, including their participation,
treatment, access to resources and their overall experience of change. Sometimes referred to as
beneficiary contact monitoring (BCM), it often includes a stakeholder complaints and feedback
mechanism.

The Organisation for Economic Co-operation and Development (OECD, 2002) defines monitoring and
evaluation as follows: Monitoring is a continuing function that uses systematic collection of data on
specified indicators to provide management and the main stakeholders of an ongoing
development intervention with indications of progress and achievement of objectives and
progress in the use of allocated funds. Evaluation, on the other hand, is the systematic assessment
of an ongoing or completed project, program or policy, its design, implementation and results.
The aim is to determine the relevance and fulfillment of objectives, development efficiency,
effectiveness, impact and sustainability.

Globally, Australia is one of the leading countries in the world in embracing M&E systems in
development projects (UNDP, 2002). The government created a fully fledged government
evaluation system, managed by the Department of Finance (DOF). This provided a spending
baseline and freed up the budget process from a detailed, line-item scrutiny of spending, to focus
instead on changes in government policy and spending priorities in development projects.
The government of Australia advocated the principles of program management and budgeting,
with a focus on the efficiency and effectiveness of government programs, through sound
management practices, the collection of performance information, and the regular conduct of
program evaluation (Mackay, 2011).

What should the M&E plan include? The M&E plan should include an introduction and an indicator
matrix table. The introduction should briefly describe the aims and objectives of the program, the
methodologies used to obtain the data, the planned interventions to be implemented, the critical
assumptions underlying the intervention and the anticipated critical hindrances that might have an
effect on the project.
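To make the structure of such an indicator matrix concrete, the sketch below shows one possible way to represent a single row as a data structure. It is an illustrative assumption based only on the elements named in the paragraph above (objectives, data collection methodology, critical assumptions); the field names and example values are hypothetical and are not taken from any actual CIE or donor template.

    # Illustrative sketch only: one possible representation of an indicator matrix row.
    # Field names and example values are hypothetical, not an actual M&E plan template.
    from dataclasses import dataclass

    @dataclass
    class IndicatorMatrixRow:
        objective: str       # aim or objective of the program the indicator belongs to
        indicator: str       # what is measured to track progress
        data_source: str     # methodology used to obtain the data
        frequency: str       # how often the data are collected
        assumptions: str     # critical assumptions underlying the intervention

    example_row = IndicatorMatrixRow(
        objective="Sponsored children progress through primary school",
        indicator="Percentage of sponsored children promoted to the next grade",
        data_source="School records collected by project staff",
        frequency="Quarterly",
        assumptions="Schools remain open and accessible during the reporting period",
    )
    print(example_row)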

Government M&E systems in Africa operate in complex terrain. To some extent they are
hostage to other forces in government; nevertheless, given a results-driven reform agenda,
incentives can be put in place so that the evidence generated supports developments in delivery,
budgeting, and monitoring and evaluation that are consistently designed to support valued change in
people's lives, particularly the underprivileged (Nabulu, 2015). In Kenya, monitoring and
evaluation systems have not been that effective due to several challenges, especially in the
government sector. In 2005, the then Ministry of Planning and National Development
commissioned work on the design of an appropriate framework for Monitoring and Evaluation
(M&E) in the National Development Program. This proposed monitoring and evaluation
framework has not been fully operational. This view is supported by Wanjiru
(2008), who indicated in her social audit of the CDF that monitoring and reporting should be
strengthened and deepened in all CDF projects.

1.2. Statement of the Problem
Monitoring and Evaluation (M&E) provides government officials, development managers, the
public and private sector and civil society with better means for learning from past experience,
improving service delivery, planning and allocating resources and demonstrating results as
part of accountability to key stakeholders (International Finance Corporation (IFC), 2008). It
also brings institutional development, which refers to creating or strengthening the capacity of an
institution to reflect systematically and rigorously upon its role and function, better enabling it to
carry out its responsibilities. It reflects an attempt to introduce change and development in the way
the institution is organized so that it is better able to meet its mission (World Bank, 2005).

The performance of projects depends on various factors. One of the key factors for project success
is having a good monitoring and evaluation system and practice. Project monitoring and
evaluation is an important element of program management, as it adds value to the overall
efficiency of project implementation by offering corrective actions for variances from the
expected standard. Project directors are required to exercise more monitoring and
evaluation of projects and to develop frameworks and guidelines for measuring performance.

A preliminary assessment of Compassion International Ethiopia and its assisted projects shows that
monitoring and evaluation faces a number of challenges. There is a monitoring and evaluation
system in the different programs and at the country office; however, the system is not efficient and
effective. In some cases, a project monitoring and evaluation system does not exist, projects
are not routinely monitored, monitoring findings are not taken up by decision makers, the
project team does not follow up the translation of findings into practice, and the evaluations
conducted are not well monitored for further improvement.

Monitoring and evaluation is the backbone of project performance, yet the preliminary study
found no research on the Compassion International Ethiopia monitoring and evaluation system.
This research therefore tries to identify the monitoring and evaluation system and the gaps in its
methods in this organization.

Compassion International Ethiopia-financed projects do not have a monitoring and evaluation
officer at the project level, and the end results for project beneficiaries could not be evaluated with
the donor, Compassion International Ethiopia. Based on the above problems, the research addresses
the following research questions.

In response to the problem, the researcher proposes a detailed study to bridge the gap in the
literature on CIE-funded projects. The gaps were identified from the literature the researcher
reviewed in search of studies on this topic; however, the researcher could not find much for the
Ethiopian case or for CIE-financed projects. Furthermore, the studies the researcher reviewed on
international NGOs are descriptive and do not present inferential analysis on the sector, nor do
they address the complex issues that limit our understanding of the emergent effects of monitoring
and evaluation on project success factors in CIE-financed projects, which increasingly affect
NGOs' poverty alleviation projects.

1.3. Research Questions


 What are the methods of monitoring and evaluation practice in the selected projects of
the Compassion International Ethiopia program?
 How have monitoring and evaluation practices in the selected projects contributed to
project success and to the achievement of project objectives?
 What are the gaps identified in the existing monitoring and evaluation practices which
need to be improved for future programming?

1.4. General Objective


The aim of the research is to examine the effect of monitoring and evaluation on project success
in Compassion International Ethiopia projects.

1.4.1 Specific Objectives

The specific objectives of this research are:

 To assess the monitoring and evaluation methods in Compassion International Ethiopia
projects
 To examine the effect of monitoring and evaluation practices on project success
 To identify the gaps in monitoring and evaluation in CIE

1.5. Hypothesis
H0: Monitoring and evaluation practice does not have a positive and significant effect on project success.
HA: Monitoring and evaluation practice has a positive and significant effect on project success.
H0: Monitoring and evaluation competency does not have a positive and significant effect on project success.
HA: Monitoring and evaluation competency has a positive and significant effect on project success.

H0: Downward accountability does not have a positive and significant effect on project success.
HA: Downward accountability has a positive and significant effect on project success.
H0: Project life cycle does not have a positive and significant effect on project success.
HA: Project life cycle has a positive and significant effect on project success.
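Taken together, these hypotheses imply a multiple linear regression model of the general form below. The notation is a sketch added here for clarity, based on the stated hypotheses and the conceptual framework in Section 2.4, and is not copied from the thesis itself:

    Project success = β0 + β1 (M&E practice) + β2 (M&E competency)
                         + β3 (Downward accountability) + β4 (Project life cycle stage) + ε

Each alternative hypothesis corresponds to the claim that the respective coefficient β1 to β4 is positive and statistically significant, while the matching null hypothesis states that it is not.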

1.6. Limitation of the study


The study has potential weaknesses. The major limitation is that it does not cover
the wide array of project management practices and specific analogies. This paper also does not
cover detailed financial and work analyses of the organization's performance; rather, it analyzes
the measures undertaken to avoid ineffectiveness in project implementation. Even though this
research has tried to assess the actual trend of monitoring and evaluation in CIE projects' success
based on the presented evidence, it also has the limitations below regarding the acceptability
and implementation of changes in the system.

The study focused on the factors affecting monitoring and evaluation practice and NGO project
success from the project management perspective, but there are various other internal and
external factors at play.

Lack of adequate resources could also be taken as an important limitation of this research, as
increasing the sample size or incorporating additional research designs would require much more
time and cost.

The scope of the research covers projects handled in Addis Ababa city only, whereas there are
various other projects being carried out by CIE outside Addis Ababa; due to time and cost
constraints these were not included, and future studies should include them for a broader view.

1.7. Scope of the Study


The scope of this paper is limited to the general factors affecting monitoring and evaluation
practice in CIE NGO projects implemented over the last five years. Since monitoring
and evaluation and project success is a vast topic, it cannot be exhaustively discussed
in this research. Therefore, the study is delimited particularly to assessing the practice of monitoring
and evaluation in CIE-financed NGO projects in Addis Ababa city only. The respondents were
program staff members such as senior program management team members, monitoring and evaluation
staff, project managers and project staff who have more than one year of experience in
Compassion International Ethiopia. In particular, the research focused on program staff members
who have in-depth knowledge of both project management and monitoring and evaluation.

1.8. Significance of the Study


The research findings will help CIE-financed projects to improve understanding of the factors
affecting project monitoring and evaluation practice and project success, and will also help to avoid
some project problems; the study has analyzed the actual scenarios of CIE-financed projects.
Moreover, it contributes towards a reduction in the rate of project failure by providing relevant
information that would help to improve the success of projects in CIE. In a broader view, this
research is significant in alleviating the problems facing CIE-financed projects. It creates an
understanding and foundation for future academic studies on the factors affecting monitoring and
evaluation practice and project success in NGOs by identifying and studying the factors affecting
project success. Furthermore, it identifies challenges and causes of ineffectiveness in the area and
suggests measures to be taken to overcome them; other researchers may also use this study as a
benchmark or as a reference in their studies.

1.9. Organization of the Study


This paper is organized in five chapters. Chapter one is the introduction, which includes the
background of the study, statement of the problem, objectives of the study, significance of the
study, scope of the study, limitations of the study and definition of terms. Chapter two contains
the review of related literature. Chapter three covers the research design and methodology, including
data sources, sampling techniques, data gathering tools and procedures of data collection.
Chapter four includes data presentation, analysis and interpretation, and finally chapter five
contains the summary of major findings, conclusion and recommendations of the study.

CHAPTER TWO
2. REVIEW OF RELATED LITERATURE
2.1. Theoretical Review
2.1.1 Monitoring and evaluation system
The effectiveness of project monitoring and evaluation is dependent on the approach of
monitoring and evaluation, the monitoring and evaluation competency, downward accountability
and the sound involvement of monitoring and evaluation in the project life cycle. There are
various monitoring and evaluation approaches that have been singled out through the literature
review; they are explained in the following paragraphs. Various monitoring and evaluation
approaches and tools have been used in the development sphere and have undergone changes in
parallel with dominant development paradigms in the development discourse (Hummelbrunner, 2010).

The main monitoring and evaluation approaches are currently based on the positivist and
constructivist paradigms. The former are linear, rigid and quantitative approaches, while the latter
are more nonlinear and qualitative, allowing room for measuring complex processes (Rogers,
2012). Some believe that the combination of these methods can work best, while others insist
that fusion of these tools is not possible as they are completely different (Earl et al., 2001).

The Balanced Scorecard is another approach that can be employed in evaluating projects. The
Balanced Scorecard evaluates projects on the basis of four perspectives: the financial perspective,
the customer perspective, internal business processes, and learning and growth. Alhyari et
al. (2013) found that the balanced scorecard approach fitted very well with monitoring and
measuring the performance of e-government in Jordan, and also with evaluating success in IT
project investments. Balanced scorecards in the INGO context of Ethiopia are rather part of the
work of the Ethiopian Social Accountability Program (ESAP) (Anteneh, 2015). Hence, the focus of
this research is to look at the role of monitoring and evaluation more specifically in relation to
the project life cycle, accountability, the monitoring and evaluation system and competency towards
achieving the success of the project.

Logical framework (Log Frame) is one of the most common approaches used in project
management for both planning and monitoring of projects. The Log Frame matrix is a tool that is
applicable to all organizations, both governmental and nongovernmental, that are engaged in
development activities (Middleton, 2005; Martinez, 2011). Hummelbrunner (2010) further
confirms the continued use of the Log Frame despite several criticisms. He asserts that the Log
Frame Approach has not been fundamentally weakened by critics.

Even though many donors acknowledge its limits and weaknesses, they still maintain its use as a
planning and monitoring tool. Myrick (2013) expresses that a pragmatic approach to monitoring
and evaluation is ideal; however, in the real world, practitioners may be limited by constraints that
will prevent their continued use of either a log frame or some overly pragmatic approach to M&E.
Myrick (2013) further explains that whatever the approach used, at least the basic principles of
monitoring and evaluation, which are measurable objectives, performance indicators, targets and
periodic reporting, should be used in a reporting tool. The advantages of a Log Frame include
simplicity and efficiency in data collection, recording and reporting. However, the Log Frame has
faced criticism around its linearity, rigidity and stifling of creative and innovative ways of working.

Conditions and efforts have to be made to modify the logical framework through the inclusion of
more participatory learning elements. Hence, this study tries to look at which monitoring and
evaluation practices help to measure outcomes and impact correctly and consequently contribute
to project success (Myrick, 2013).


2.1.2 Monitoring and Evaluation in project management


Achieving development results, as most realize, is often much more difficult than imagined. To
achieve development results and changes in the quality of people's lives, governments, UNDP
and other partners will often develop a number of different plans, strategies, programs and
projects. These typically include:

 A National Development Plan or Poverty Reduction Strategy
 Sector-based development plans
 A United Nations Development Assistance Framework (UNDAF)
 A corporate strategic plan (such as the UNDP 2008-2011 Strategic Plan)
 Global, regional and country program documents (CPDs) and country program action plans (CPAPs)
 Monitoring and evaluation (M&E) frameworks and evaluation plans
 Development and management work plans
 Office and unit specific plans
 Project documents and annual work plans

However, good intentions, large programs and projects, and lots of financial resources are not
enough to ensure that development results will be achieved. The quality of those plans, programs
and projects, and how well resources are used, are also critical factors for success.

To improve the chances of success, attention needs to be placed on some of the common areas of
weakness in programs and projects. Four main areas of focus are identified consistently:

1. Planning and program and project definition—Projects and programs have a greater chance of
success when the objectives and scope of the program or project are properly defined and
clarified. This reduces the likelihood of experiencing major challenges in implementation.

2. Stakeholder involvement—High levels of engagement of users, clients and stakeholders in
programs and projects are critical to success.

3. Communication—Good communication results in strong stakeholder buy-in and mobilization.
Additionally, communication improves clarity on expectations, roles and responsibilities, as well
as information on progress and performance. This clarity helps to ensure optimum use of
resources.

4. Monitoring and evaluation—Programs and projects with strong monitoring and evaluation
components tend to stay on track. Additionally, problems are often detected earlier, which
reduces the likelihood of major cost overruns or time delays later.

Good planning combined with effective monitoring and evaluation can play a major role in
enhancing the effectiveness of development programs and projects. Good planning helps us focus
on the results that matter, while monitoring and evaluation helps us learn from past successes and
challenges and informs decision making so that current and future initiatives are better able to
improve people's lives and expand their choices (UNDP, 2009 and 2010).

2.1.3 The Benefits of Monitoring and Evaluation for Organizations


Monitoring involves tracking progress over time during the whole knowledge management
process. Evaluation can be a very powerful tool for learning and change because, more than
training or development work, it puts the needs and experiences of users and potential users,
and the purpose and values of the project, at the centre of the change process. But it is also very
political (Sarah, 2006).

According to Sarah (2006), evaluation work can: improve effectiveness in the way your
organization meets local needs; identify areas for improvement in your service to users; attract
resources; help share learning and experience across the organization; improve accountability to
users, members and funders; give greater work satisfaction to all managing body members, staff
and volunteers; celebrate progress and achievement; identify changes or new directions; and make
the case for new resources.

Monitoring and evaluating program or project performance enables improved management of
outputs and outcomes while encouraging the allocation of effort and resources in the direction
where they will have the greatest impact. M&E can play a crucial role in keeping projects on
track, creating the basis for institutional learning and creating an evidence base for current and
future projects through the systematic collection and analysis of information on the
implementation of a project (IFC, 2008).

2.1.4 Purposes of Monitoring and Evaluation for Public Organization
If organizations are to carry out effective M&E around capacity building, a key first question to
address is "what is the purpose of that M&E?" The usual answer to this is a combination of
accountability and learning in order to improve performance (Nigel S. & Rachel S., 2010).
Monitoring and evaluating organizational practices is necessary to improve and enhance the
quality of existing programs; NGOs are facing increasing requirements to provide evidence to
support their performance.

According to McDonald (2003), monitoring and evaluation helps organizations to: assess the
efficiency and effectiveness of a program; refine and improve an existing program; decide
whether to continue or replicate an initiative; contribute to the established evidence base; and
justify the program or initiative and help procure further funding. For these reasons, it is
important that organizations devote resources towards improving their monitoring and evaluation
processes, as well as their capacity (Eccles & Gootman, 2002).

2.1.5. Introducing the 10-Step Model for Building a Results-Based M&E System
Although experts vary on the specific sequence of steps in building a results-based M&E system,
all agree on the overall intent. For example, different experts propose four- or seven-step models.
Regardless of the number of steps, the essential actions involved in building an M&E system are
to: formulate outcomes and goals; select outcome indicators to monitor; gather baseline
information on the current condition; set specific targets to reach and dates for reaching them;
regularly collect data to assess whether the targets are being met; and analyze and report the
results (New Delhi, pp. 24–31).
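As a simple illustration of these essential actions, the sketch below expresses one outcome indicator with its baseline, target and latest collected value, and computes how much of the baseline-to-target distance has been covered. This is a hedged example written for this review, not code from the cited handbook, and the figures are invented.

    # Illustrative sketch, not from the cited handbook: baseline, target and the
    # latest monitored value for one outcome indicator, with a simple progress check.
    from dataclasses import dataclass

    @dataclass
    class OutcomeIndicator:
        name: str
        baseline: float   # performance baseline at the start of monitoring (Step 4)
        target: float     # results target to be reached by a set date (Step 5)
        latest: float     # most recent value from regular data collection (Step 6)

        def progress(self) -> float:
            """Share of the baseline-to-target distance achieved so far."""
            return (self.latest - self.baseline) / (self.target - self.baseline)

    indicator = OutcomeIndicator(
        name="Households with access to clean water (%)",
        baseline=40.0, target=70.0, latest=55.0,
    )
    print(f"{indicator.name}: {indicator.progress():.0%} of the way to target")  # prints 50%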

Given the agreement on what a good system should contain, why are these systems not part of
the normal business practices of government agencies, stakeholders, lenders, and borrowers?
One evident reason is that those designing M&E systems often miss the complexities and
subtleties of the country, government, or sector context. Moreover, the needs of end users are
often only vaguely understood by those ready to start the M&E building process. Too little
emphasis is placed on organizational, political, and cultural factors.

Step 1. Throughout, the model highlights the political, participatory, and partnership processes
involved in building and sustaining M&E systems, that is, the need for key internal and external
stakeholders to be consulted and engaged in setting outcomes, indicators, targets, and so forth.

Step 2 of the model involves choosing outcomes to monitor and evaluate. Outcomes show the
road ahead.

Step 3 involves setting key performance indicators to monitor progress with respect to inputs,
activities, outputs, outcomes, and impacts. Indicators can provide continuous feedback and a
wealth of performance information. There are various guidelines for choosing indicators that can
aid in the process. Ultimately, constructing good indicators will be an iterative process.

Step 4 of the model relates to establishing performance baselines—qualitative or quantitative—
that can be used at the beginning of the monitoring period. The performance baselines establish a
starting point from which to later monitor and evaluate results.

Step 5 builds on the previous steps and involves the selection of results targets, that is, interim
steps on the way to a longer-term outcome. Targets can be selected by examining baseline
indicator levels and desired levels of improvement.

Step 6 of the model includes both implementation and results monitoring. Monitoring for results
entails collecting quality performance data, for which guidelines are given.

Step 7 deals with the uses, types, and timing of evaluation.

Step 8, reporting findings, looks at ways of analyzing and reporting data to help decision makers
make the necessary improvements in projects, policies, and programs.

Step 9, using findings, is also important in generating and sharing knowledge and learning within
governments and organizations.

Finally, Step 10 covers the challenges in sustaining results-based M&E systems including
demand, clear roles and responsibilities, trustworthy and credible information, accountability,
capacity, and appropriate incentives.

The use of such results-based M&E systems can help bring about major cultural changes in the
ways that organizations and governments operate. When built and sustained properly, such
systems can lead to greater accountability and transparency, improved performance, and the
generation of knowledge (Jody Zall Kusek and Ray C. Rist, Ten Steps to a Results-Based
Monitoring and Evaluation System, The World Bank, 2004, pp. 23–25).

2.3. Empirical Review


The empirical literature provides evidence on monitoring and evaluation practices and project
success in CIE. Additionally, at the end of this section the conceptual framework of this study is
presented. With a view to bringing projects to success, MoFED (2008, pp. 10–11) conducted an
assessment of public sector monitoring and evaluation systems in the context of Ethiopia; most of
the project success factors are closely related to monitoring and evaluation functions and systems,
which the researcher highlights as follows:

 In the project cycle management, the attention given to monitoring and evaluation is
inadequate, resulting from insufficient resource allocation as well as insufficient
skills and experience;

 The roles and responsibilities of monitoring and evaluation are not clear; it is usually
considered an externally imposed obligation by donors, and hence the monitoring and
evaluation team gets busy with mechanical aspects such as supporting the project managers
only in data collection and report writing;
 The monitoring and evaluation system is too dependent on donor assistance and
collapses when the funding is terminated. The system is put in place without a thorough
analysis and hence relevant issues are not incorporated;

 The expectation from monitoring and evaluation is very high and it demands much
information to be collected. This information does not consider outreach, effects and
impacts but rather focuses only on the financial and physical aspects of the projects, and
hence the monitoring and evaluation information is of poor quality. It is also rather
irrelevant compared to the actual monitoring and evaluation functions;

 There was insufficient, untimely or a lack of feedback, and the needs and aspirations
of stakeholders were overlooked and invisible in monitoring and evaluation;

 There was a lack of integration and cooperation between project monitoring and
evaluation and other project management functions and, more importantly, poor
accountability for failures; and

 Monitoring and evaluation findings and lessons learnt are not taken into consideration
for future project design and programming.

2.3.1. Project Success


Successful project implementation is complex, usually requiring simultaneous attention to a wide
variety of human, budgetary, and technical variables. As a result, the organizational project
manager has the responsibility to handle all of the elements essential for project success. In
addition, projects are implemented in a dynamic environment; therefore, identifying factors that
are critical to project success can help to focus on important areas and set differential priorities
across different project elements (Pinto & Slevin, 1987).

Factors of project success or failure are issues not only for developing countries but also for
developed ones, though they seem to be associated only with the former. Ethiopia has passed
through different systems of socio-economic and political management since the mid-1930s, from
feudo-capitalist to socialist-oriented and then to market-oriented with decentralized management.

In all three systems, the public sector has played a leading role in the planning, execution,
monitoring and evaluation and close-out of projects. According to Temesgen (2007), public sector
progress report findings on project implementation showed that projects were over- or
under-budgeted and were not completed within the planned period. Furthermore, the researcher
noted that most projects failed due to institutional management difficulties, problems related to
policy and resources, and technical problems.

The reasons behind project failure in the Ethiopian public sector are poor project evaluation and
poor planning, as researched by Getachew (2010). This limited the attention given to evaluation
at both the strategic and grassroots levels. Considering evaluations as impositions from donors
resulted in a lack of commitment and poor communication of project, program and policy impact
in designing information collection platforms. Other results of this attitude include: a lack of
integration amongst different actors in the evaluation system at diverse levels; evaluation findings
and lessons learnt not being used for programming and making informed decisions; narrowing the
scope of evaluation only to physical and financial reporting; and limited evaluation capacity at
both the individual and systemic level.

One of the major factors in project failure in the Ethiopian public sector is weak project monitoring
and evaluation. The project monitoring and evaluation system should therefore be well designed in
order to track progress, improve efficiency to the intended level, keep the project on course and
examine whether or not projects meet their objectives (MoFED, 2008).

2.3.2. Monitoring and Evaluation Practice


If you do not measure results, you cannot tell success from failure (World Bank, 2004): "we
cannot control what we cannot measure". Donors have clear guidelines on monitoring and
evaluation, where all stakeholders must be involved in the monitoring and evaluation process.

2.3.3. Project Life cycle stage


According to SCI (2016), a project is a package of measures, limited or capable of limitation in
regional, social, subject and temporal terms by the partner and possibly other institutions, in order
to reach an objective that has been precisely designated beforehand and is objectively verifiable.
A project may be part of an overarching program.

The Project Life Cycle refers to a logical sequence of activities to accomplish the project’s goals
or objectives. Regardless of scope or complexity, any project goes through a series of stages
during its life. There is first an Initiation or Birth phase, in which the outputs and critical success
factors are defined, followed by a Planning phase, characterized by breaking down the project
into smaller parts/tasks; an Execution phase, in which the project plan is executed; and lastly a
Closure or Exit phase, which marks the completion of the project.

As of June 30, 2016, the country office has incorporated indicators related to project management,
advocacy and policy development, project quality and budget into the KPIs, where line managers
should sit together with staff in one-to-one sessions to continuously assess and strengthen the
capacity of the staff. The detailed implementation plan, monitoring and evaluation plan, budget
versus accomplishments, phased budget, and IPTT (Indicator Performance Tracking Table) are
some of the deliverables expected from the project managers. In all stages of the project life cycle,
the monitoring and evaluation function and the project team have to work hand in hand to change
the lives of children.

2.4. Conceptual Framework

The framework depicts the relationships between monitoring and evaluation and project success,
as mediated by management support. It is conceptualized that the factors influencing project
success are the effective strength of the monitoring team; the approach used by the monitoring and
evaluation team in evaluating projects; accountability, specified as information sharing,
participation and a complaint and response mechanism; and the stage of the project life cycle. The
monitoring and evaluation activities, accountability and project success are all geared towards the
achievement of value addition to the organization.

This emphasis on constant re-evaluation of the effects of work, including networking and
advocacy, allows program staff to hold themselves and their program to higher standards of
accountability and impact. It also empowers them to prioritize learning as a valued outcome that
is essential to quality programming. By presenting monitoring and evaluation as much more than
reporting, i.e. as a tool for re-planning throughout the program cycle, the researcher begins to see
it as the engine room of the change that the project seeks. Finally, the tool is heavily visual and
has been produced with engaging illustrations that make it very well suited to translation.
Figure 1: Conceptual framework. Independent variables: monitoring and evaluation system,
monitoring and evaluation competency, downward accountability, and project life cycle stage.
Dependent variable: project success, measured in terms of quality, cost and time.
Source: own construction (2020).
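To show how the relationships in this framework can be estimated, the sketch below fits the implied multiple regression of project success on the four independent variables. It is a minimal, hedged illustration only: the thesis analysis was run in SPSS, whereas this sketch uses Python's statsmodels with synthetic Likert-style composite scores, and all column names and values are hypothetical.

    # Minimal sketch (not the author's SPSS procedure): ordinary least squares
    # regression of project success on the four independent variables of the
    # conceptual framework, using synthetic data for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 65  # number of survey respondents in this study

    data = pd.DataFrame({
        "me_system": rng.integers(1, 6, n),               # 5-point Likert composites
        "me_competency": rng.integers(1, 6, n),
        "downward_accountability": rng.integers(1, 6, n),
        "life_cycle_stage": rng.integers(1, 6, n),
    })
    # Synthetic dependent variable, generated only so the example runs end to end.
    data["project_success"] = (
        0.4 * data["me_system"] + 0.3 * data["me_competency"]
        + 0.2 * data["downward_accountability"] + 0.1 * data["life_cycle_stage"]
        + rng.normal(0, 0.5, n)
    )

    model = smf.ols(
        "project_success ~ me_system + me_competency"
        " + downward_accountability + life_cycle_stage",
        data=data,
    ).fit()
    print(model.summary())  # coefficients, t-tests, R-squared and F-statistic

The coefficient output of such a fit corresponds conceptually to the model summary, ANOVA and coefficients tables (Tables 4.10–4.12) reported in Chapter Four.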

Chapter Three
3. Research Design and Methodology
According to Kothari (2004), the formidable problem that follows the task of defining the
research problem is the preparation of the design of the research project, popularly known as the
"research design". Decisions regarding what, where, when, how much, and by what means
concerning an inquiry or a research study constitute a research design. A research design is the
arrangement of conditions for the collection and analysis of data in a manner that aims to combine
relevance to the research purpose with economy in procedure. In fact, the research design is the
conceptual structure within which research is conducted; it constitutes the blueprint for the
collection, measurement and analysis of data.

3.1. Research Design

Explanatory and descriptive research designs were used for this research, as they enabled the
researcher to describe the monitoring and evaluation practice in the success of Compassion
International Ethiopia projects. The research objective is also to assess and explain whether the
monitoring and evaluation practices are contributing to the success of the projects.

3.2. Research Approach


Both qualitative and quantitative research methods were used to conduct this particular study. The
qualitative method of research explains the experience of people in detail and permits the study
and understanding of people from their own perception. These approaches, in combination, allow
gathering complementary information on the issue and help to make the existing situation
comprehensible. The focus of the study is primarily quantitative, supported with a qualitative
approach, where the logical flow of the analysis permits interpreting the near-term impacts on
project success and the anticipated long-term effects of the projects.

3.2. Data source


The study was conducted by gathering relevant and appropriate information on the role of
monitoring and evaluation in project success. The study used quantitative methods, collecting
primary and secondary data. Relevant data and information were also gathered from senior and
middle-level managers, directors and monitoring and evaluation experts. The primary and
secondary sources helped to triangulate data from different perspectives regarding the research
problem. The secondary sources of information were used to provide the conceptual framework
and to acquire a general picture of the problem. For the collection of the required data and
information from the primary sources, a questionnaire was used to obtain information within the
framework of the study.

3.4. Target Population


The target population of the study was 142 employees of the Compassion International Ethiopia head office and its projects: senior managers, department managers, monitoring and evaluation experts, project partnership facilitators, project directors and other project workers.

3.5. Sample and Sampling Techniques


Sampling is defined as the selection of some part of an aggregate or totality on the basis of which a judgment or inference about that aggregate or totality is made. In other words, it is the process of obtaining information about an entire population by examining only a part of it. In most research work and surveys, the usual approach is to make generalizations or draw inferences about the population parameters based on samples taken from that population. The researcher quite often selects only a few items from the universe for study purposes, on the assumption that the sample data enable the population parameters to be estimated. The sample should be truly representative of the population characteristics, without any bias.

The distribution of employees across the organization and the number included in the sample is shown below.

Department                    No. of employees    Employees included in the sample
1. Partnership Department     42                  25
2. Program Department         38                  6
3. Business Department        22                  4
4. Four projects              40                  20
Total                         142                 65

Not all of the employees listed above are directly involved in monitoring and evaluation processes; only those who take an active part in monitoring and evaluation could give well-informed responses. The researcher therefore used non-probability sampling to select the 65 employees who are directly involved at the head office and at the projects. The selected employees are partnership facilitators, department managers and project directors.

3.6. Data Collection Procedure

The primary data were collected by the researcher through a self-administered survey questionnaire and key informant interviews, and secondary data were collected and merged with the primary data. The primary sources include the Compassion International senior management team, middle-level managers and monitoring and evaluation experts, approached through both the questionnaire and key informant interviews. Secondary data sources include records of the organization such as narrative annual reports, evaluation reports, audit reports, monitoring visit reports and related documents.

3.7. Data Gathering Instruments


3.7.1. Questionnaire
A survey questionnaire was prepared and administered to senior management team members, middle-level managers and monitoring and evaluation experts. The questionnaire contains mainly closed-ended and a few open-ended questions. It is an appropriate instrument for obtaining a variety of opinions within a relatively short period of time; the rating of the questions depends on the type of questions and the choices given. Since the medium of communication of the international organization is English, the questionnaire was constructed in English. It consisted of different parts, mainly focusing on the monitoring and evaluation practices and their contribution to project success.

3.8. Method of Data Analysis


The study used multiple regression as well as descriptive statistics to see the effect of the independent variables on the dependent variable. The term analysis refers to the computation of certain measures along with searching for patterns of relationship that exist among data groups. Thus, in the process of analysis, relationships or differences supporting or conflicting with the original or new hypotheses should be subjected to statistical tests of significance to determine with what validity the data can be said to indicate any conclusions. Both quantitative and qualitative techniques of data analysis were applied, using percentages, tables and charts with the help of the IBM SPSS Statistics version 23 statistical software.

3.9. Regression Model


A linear regression equation was used to find out what relationship, if any, exists between the independent variables and the dependent variable.

Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε

Dependent variable: Y = project success

β0 = the regression constant (Y-intercept)

Independent variable X1 = monitoring and evaluation practice

Independent variable X2 = monitoring and evaluation competency

Independent variable X3 = downward accountability

Independent variable X4 = project life cycle

β1, β2, β3 and β4 are the coefficients of the respective independent variables, and ε is the error term.
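As an illustrative aside (not part of the thesis analysis, which was carried out in SPSS), the same model could be estimated with open-source tools. The sketch below is a minimal example using Python's statsmodels; the file name and column names are hypothetical placeholders for the summated scale scores, not the actual study variables.

```python
# Minimal sketch of fitting Y = b0 + b1*X1 + b2*X2 + b3*X3 + b4*X4 + e by ordinary
# least squares. The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")  # hypothetical file of summated scale scores (1-5)

X = sm.add_constant(df[["me_practice", "me_competency",
                        "downward_accountability", "project_life_cycle"]])  # adds b0
y = df["project_success"]

model = sm.OLS(y, X).fit()   # ordinary least squares, as in SPSS linear regression
print(model.summary())       # reports R-squared, F-statistic, coefficients and p-values
```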
3.10. Reliability and Validity
According to Saunders et al. (2009), internal validity in relation to questionnaires refers to the ability of the questionnaire to measure what the researcher intends it to measure. To achieve this, the questions in the questionnaire emanate from the broad research questions and are tailored to meet the research objectives.
Content validity, on the other hand, refers to the extent to which the measurement device, in this case the measurement questions in the questionnaire, provides adequate coverage of the investigative questions. This is achieved by providing a five-point Likert scale that addresses a range of alternatives.
Criterion-related validity, sometimes known as predictive validity, is concerned with the ability of the measures (questions) to make accurate predictions. This is addressed by providing a range of different sets of questions that cover the main project success issues while giving rich and in-depth information.

Reliability, on the other hand, refers to consistency: the extent to which the data collection techniques or analysis procedures will yield consistent findings. According to Gliem (2003), when using Likert-type scales it is essential to calculate and report a coefficient of internal consistency reliability. Because Cronbach's alpha does not provide reliability estimates for single items, the analysis of the data must use the summated scales or subscales and not individual items. In this study, Cronbach's alpha was calculated for the 22 Likert-style items using SPSS statistical software, and the result is presented in the following table.
Table 3.1: Cronbach's alpha reliability test

Cronbach's Alpha    No. of items
0.837               22

Source: own survey, 2020

Cronbach's alpha measures the reliability of the research instrument. For this study, the alpha coefficient calculated for the overall scale as a reliability indicator is 0.837. Values of Cronbach's alpha above 0.7 are considered good; since the alpha value in this study is well above 0.7, the questionnaire has very good reliability.
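For readers without SPSS, Cronbach's alpha can also be computed directly from its definition, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score). The sketch below is a minimal, hypothetical example in Python; the input file of 22 Likert items is assumed, not taken from the study data.

```python
# Minimal sketch of Cronbach's alpha for a set of Likert items (one column per item).
# The input file is hypothetical; the thesis reports alpha = 0.837 for its 22 items via SPSS.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                  # number of items
    item_variances = items.var(axis=0, ddof=1).sum()    # sum of the item variances
    total_variance = items.sum(axis=1).var(ddof=1)      # variance of the summated score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

likert_items = pd.read_csv("likert_items.csv")          # hypothetical: 22 columns coded 1-5
print(round(cronbach_alpha(likert_items), 3))
```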

3.11. Ethical Consideration


Considering the importance of ethics in research work, the researcher ensured that a high level of ethics was maintained as much as possible. Participants were approached and asked for their willingness to be involved in the study before the actual data-gathering date, and the researcher ensured that participants understood the idea of the study and its purpose beforehand. Furthermore, the researcher maintained the respondents' right to decline to answer a question, to refuse to take part in any activity, or to decline to discuss any topic with which they felt uncomfortable. All information from the interviews and discussions is kept confidential.

Chapter Four
Results and Discussions
4.1. Data presentation and interpretation
There are several software packages for processing quantitative data, some of which are broad in scope and user friendly, such as SPSS. After the data were collected in a manner that enabled the researcher to have concrete information to address the objective of the study, they were edited, coded and entered into the Statistical Package for the Social Sciences (SPSS) version 20 for analysis. In the scope of the survey, 65 questionnaires were distributed to employees at the Compassion International Ethiopia head office (senior managers, department managers, monitoring and evaluation experts, project partnership facilitators) and to project directors and other project workers. The data collected from them were later used to assess project success. The responses of the subjects are presented, analyzed and interpreted using SPSS 20, reliability tests, and descriptive statistics such as the mean and standard deviation. Out of a total of 65 respondents, 58 (89.2%) filled in and returned the questionnaires; it can therefore be concluded that the majority of the respondents returned the questionnaire with answers, and the researcher used all the questionnaires returned.

4.2. Descriptive statistical Analysis


In this section the descriptive analysis is presented; the researcher used frequencies, percentages, means and standard deviations to show the results obtained from the primary data sources.

4.2.1. Demographic Characteristics of the Respondents

This includes the respondents' sex category, level of education and employment status. It helps to understand from which age group, sex category and level of education the data were obtained, and it also indicates the respondents' employment status.
4.2.1.1. Gender distribution
Table 4.1: Participants' gender

Category    Frequency    Percent
Male        51           86.4
Female      8            13.6
Total       59           100.0

Source: own survey, 2020

Table 4.1 shows the gender profile of the respondents by frequency and percentage: out of the 59 respondents, 51 are male and 8 are female, corresponding to 86.4% and 13.6% respectively. Since the proportion of male respondents is large, the organization is advised to encourage the involvement of women.

4.2.1.2. Educational qualification


Table 4.2: Educational qualification of respondents

Category             Frequency    Percent
PhD                  1            1.7
MBA/MSC              27           45.8
BA/BSC               22           37.3
Diploma and below    9            15.3
Total                59           100.0

Source: own survey, 2020


Table 4.2 indicates that 1 respondent (1.7%) holds a PhD degree, 27 (45.8%) of the 59 respondents hold an MBA/MSc, 22 (37.3%) are qualified with a BA/BSc, and 9 (15.3%) hold a diploma or below. From this it can be inferred that most of the respondents are educated and could understand the questionnaire and fill it in properly.
4.2.1.3. Respondents' current position
Table 4.3: Current position

Category                  Frequency    Percent
Technical team leader     13           22.0
Head of Thematic Sector   12           20.3
Program Manager           14           23.7
Program Specialist        7            11.9
MEAL Manager              9            15.3
Humanitarian Response     4            6.8
Total                     59           100.0

Source: own survey, 2020


Table 4.3 shows the respondents' current positions. The largest groups were program managers, 14 (23.7%), technical team leaders, 13 (22.0%), and heads of thematic sectors, 12 (20.3%), followed by MEAL managers, 9 (15.3%), program specialists, 7 (11.9%), and humanitarian response staff, 4 (6.8%). The study noted that the organization has competent, qualified employees who learn easily.

4.3.1 Descriptive analysis for measurement of project success
Table 4.4: Measurement of project success

S.N   Item                                                                              Mean   Std. Deviation   Level   Rank
2.1   Projects are completed at the planned time                                        4.24   0.703            Agree   3
2.2   Projects are completed within the planned budget                                  4.29   0.671            Agree   2
2.3   Projects have national as well as international quality standards that must
      be met                                                                             4.37   0.667            Agree   1
2.4   Project beneficiaries are satisfied and impacted positively                       4.15   0.715            Agree   4
2.5   Projects realized meet the planned objectives and outcomes they are intended
      to achieve                                                                         4.12   0.853            Agree   5

Source: own survey, 2020

Table 4.4 summarizes the respondents' opinions about project success. Regarding the statement that projects are completed at the planned time, the majority of respondents (above 91.5%) agreed; the mean value of 4.24 for this item also shows that it is a major contributing factor to success in projects.

The researcher also asked directly whether projects are completed within the planned budget; the majority (91.5%) of respondents agreed, and the mean value of 4.29 supports this opinion, making it the second-ranked factor affecting success or performance in projects.

As seen from the table, the majority (93.3%) also agreed that projects have national as well as international quality standards that must be met (mean 4.37). It can be concluded that meeting national and international quality standards is an important aspect of, and a major factor in, successful project performance.

The majority (88.1%) also agreed that project beneficiaries are satisfied and impacted positively (mean 4.15), from which it can be concluded that satisfying project beneficiaries is an important aspect of, and a major factor in, successful project performance.

Finally, the majority (83.1%) agreed that the projects realized meet the planned objectives and outcomes they are intended to achieve (mean 4.12), from which it can be concluded that meeting the planned objectives and outcomes is an important aspect of, and a major factor in, successful project performance.
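As a side note, the kind of summary reported in Tables 4.4 to 4.8 (item mean, standard deviation, percentage of agreement and rank) could be reproduced with a few lines of code. The sketch below is only illustrative: the file and column names are hypothetical, and "agreement" is assumed to mean a rating of 4 or 5.

```python
# Minimal sketch: mean, standard deviation, percentage agreeing (rating 4 or 5) and rank
# for a set of Likert items. File and column names are hypothetical.
import pandas as pd

responses = pd.read_csv("project_success_items.csv")    # hypothetical: items coded 1-5

summary = pd.DataFrame({
    "mean": responses.mean(),
    "std_dev": responses.std(ddof=1),
    "pct_agree": (responses >= 4).mean() * 100,          # share answering agree/strongly agree
})
summary["rank"] = summary["mean"].rank(ascending=False, method="min").astype(int)
print(summary.sort_values("rank"))
```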

4.3.2 Descriptive analysis for Monitoring and Evaluation Practices


Table 4.5: Assessment of monitoring and evaluation practices, more specifically the monitoring and evaluation system

S.N   Item                                                                                  Mean   Std. Deviation   Level   Rank
3.1   The monitoring and evaluation system is effective, efficient and contributes to
      achieving the project objective                                                        4.24   0.916            Agree   4
3.2   The scope and purpose of the monitoring and evaluation system is clear                4.22   0.872            Agree   5
3.3   The monitoring and evaluation system is built on a thorough situational analysis      4.42   0.770            Agree   2
3.4   The monitoring and evaluation system has buy-in from the senior management team       4.44   0.650            Agree   1
3.5   The monitoring and evaluation system reflects the theory of change and supports
      the mission and vision of the organization                                             4.27   0.739            Agree   3

Source: own survey, 2020

Table 4.5 shows that, with regard to the statement "the monitoring and evaluation system is effective, efficient and contributes to achieving the project objective", the majority of respondents (89.9%) agreed, with a mean of 4.24. The majority (88.2%) also agreed that the scope and purpose of the monitoring and evaluation system is clear (mean 4.22), and most respondents (93.2%) agreed that the monitoring and evaluation system is built on a thorough situational analysis (mean 4.42). Moreover, most respondents (91.5%) agreed that the monitoring and evaluation system has buy-in from the senior management team (mean 4.44), and 89.9% agreed that the system reflects the theory of change and supports the mission and vision of the organization (mean 4.27). From these findings it can be concluded that the overall monitoring and evaluation practice in the organization is a significant factor in project performance.

4.3.3 Descriptive analysis for Monitoring and evaluation competency


Table 4.6: Assessment of monitoring and evaluation competency

S.N   Item                                                                                  Mean   Std. Deviation   Level            Rank
4.1   The organization has a system in place to ensure that the children it aims to
      assist and other stakeholders have access to timely, relevant and clear
      information about the organization, program, project and its activities               4.46   0.652            Agree            3
4.2   The organization has a system to analyze the information collected from
      stakeholders to further improve the quality of the program                            4.44   0.650            Agree            4
4.3   The organization has a system in place to listen to the people it aims to assist,
      incorporating their views and concerns and letting them influence program
      decisions in project cycle management                                                  4.66   0.576            Strongly agree   1
4.4   The organization has a system to build the capacity (knowledge, skills and
      attitudes) of children to participate in project/program development                  4.58   0.622            Strongly agree   2

Source: own survey, 2020

Table 4.6 indicates that, with regard to the statement "the organization has a system in place to ensure that the children it aims to assist and other stakeholders have access to timely, relevant and clear information about the organization, program, project and its activities", the majority of respondents (94.9%) agreed, with a mean of 4.46. The majority (94.8%) also agreed that the organization has a system to analyze the information collected from stakeholders to further improve program quality (mean 4.44), and most respondents (97.5%) agreed that the organization has a system in place to listen to the people it aims to assist, incorporating their views and concerns and letting them influence program decisions in project cycle management (mean 4.66). Moreover, most respondents (95.5%) agreed that the organization has a system to build the capacity (knowledge, skills and attitudes) of children to participate in project/program development (mean 4.58). All monitoring and evaluation competency criteria have mean values above 4, from which it can be concluded that the overall monitoring and evaluation competency of the organization is a significant factor in project performance.
4.3.4 Descriptive analysis of the assessment of downward accountability
Table 4.7: Assessment of downward accountability

S.N   Item                                                                                  Mean   Std. Deviation   Level            Rank
5.1   The organization has a system in place to incorporate children's participation in
      project/program development, implementation, monitoring and evaluation                 4.66   0.576            Strongly agree   1
5.2   The organization has a system in place to enable the beneficiaries it aims to
      assist and other stakeholders to provide feedback and receive a response through
      effective, accessible and safe information sharing mechanisms and processes            4.58   0.675            Strongly agree   2
5.3   The organization has a system in place to store, verify and analyze feedback and
      complaints and use them as an input for future programming and quality program
      delivery                                                                                4.39   0.743            Agree            3

Source: own survey, 2020

Table 4.7 shows that, with regard to the statement "the organization has a system in place to incorporate children's participation in project/program development, implementation, monitoring and evaluation", the majority of respondents (98.3%) agreed, with a mean of 4.66. The majority (96.6%) also agreed that the organization has a system in place to enable the beneficiaries it aims to assist and other stakeholders to provide feedback and receive a response through effective, accessible and safe information sharing mechanisms and processes (mean 4.58). Moreover, most respondents (95.0%) agreed that the organization has a system in place to store, verify and analyze feedback and complaints and use them as an input for future programming and quality program delivery (mean 4.39). All downward accountability criteria have mean values above 4, from which it can be concluded that overall downward accountability in the organization is a significant factor in project performance.
4.3.5 Descriptive analysis of the project life cycle in the projects
Table 4.8: Assessment of the project life cycle in the projects

S.N   Item                                                                                  Mean   Std. Deviation   Level            Rank
6.1   The engagement of monitoring and evaluation staff in the initiation stage of a
      project is high                                                                        4.49   0.626            Agree            3
6.2   The role of monitoring and evaluation in baseline development is high                 4.56   0.534            Strongly agree   1
6.3   The engagement of monitoring and evaluation staff in the planning stage of a
      project is high                                                                        4.51   0.569            Strongly agree   2
6.4   The engagement of monitoring and evaluation in the execution stage of a project
      is high                                                                                4.44   0.595            Agree            4
6.5   The engagement of monitoring and evaluation in the evaluation stage of a
      project/program is high                                                                4.39   0.616            Agree            5

Source: own survey, 2020

Table 4.8 shows that, with regard to the statement "the engagement of monitoring and evaluation staff in the initiation stage of a project is high", the majority of respondents (96.6%) agreed, with a mean of 4.49. The majority (91.6%) also agreed that the role of monitoring and evaluation in baseline development is high (mean 4.56), and most respondents (96.8%) agreed that the engagement of monitoring and evaluation staff in the planning stage of a project is high (mean 4.51). Moreover, most respondents (98.7%) agreed that the engagement of monitoring and evaluation in the execution stage of a project is high (mean 4.44), and 96.6% agreed that the engagement of monitoring and evaluation in the evaluation stage of a project/program is high (mean 4.39). From these findings it can be concluded that the engagement of monitoring and evaluation across the project life cycle is a significant factor in project performance.

4.4. Inferential Analysis


This section presents an extensive inferential statistical analysis and its results. The inferential analysis was conducted using bivariate (Pearson) correlation and multiple linear regression analysis with the SPSS statistical software; when a continuous outcome is predicted from several predictors, the analysis is known as multiple linear regression. The section focuses on the results and discussion based on the tables generated by SPSS.
4.4.1. Correlation analysis
Table 4.9: Correlation analysis (Pearson correlation coefficients, N = 59)

                               Project    Monitoring and   Competency   Downward         Project
                               Success    Evaluation                    accountability   life cycle
Project Success                1.000      .692             .496         .429             .377
Monitoring and Evaluation      .692       1.000            .683         .568             .504
Competency                     .496       .683             1.000        .630             .570
Downward accountability        .429       .568             .630         1.000            .744
Project life cycle             .377       .504             .570         .744             1.000

Sig. (1-tailed): all correlations with project success are significant (.000 for monitoring and evaluation, competency and downward accountability; .002 for project life cycle).

Source: own survey, 2020

Table 4.9 reports the correlation results. For interpreting correlation coefficients, Cohen (2003) suggests the following intervals: 0 to 0.20 corresponds to a very weak relationship; 0.21 to 0.40 to a weak relationship; 0.41 to 0.60 to a moderate relationship; 0.61 to 0.80 to a strong relationship; and 0.81 to 1.00 to a very strong relationship.

From the correlation results illustrated in Table 4.9, there is a significant, positive and strong relationship between monitoring and evaluation and project success (r = .692, sig = .000). There is also a significant, positive and moderate relationship between evaluators' competency and project success (r = .496, sig = .000), and a significant, positive and moderate relationship between downward accountability and project success (r = .429, sig = .000). There is a significant, positive but weak relationship between project life cycle and project success (r = .377, sig = .002). From the correlation analysis it can be inferred that all of the identified project success factors are correlated with project success, which is measured in terms of meeting project accomplishment requirements.
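As an illustration only, a Pearson correlation matrix like Table 4.9, together with the significance of a given pair, could be produced as in the sketch below; the file and column names are hypothetical placeholders for the composite scores.

```python
# Minimal sketch of the Pearson correlations in Table 4.9. Hypothetical file/column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_scores.csv")

constructs = ["project_success", "me_practice", "me_competency",
              "downward_accountability", "project_life_cycle"]
print(df[constructs].corr(method="pearson"))            # full correlation matrix

# Significance for one pair, e.g. M&E practice vs project success.
r, p_two_tailed = stats.pearsonr(df["me_practice"], df["project_success"])
print(r, p_two_tailed / 2)   # halved for a one-tailed test in the hypothesized direction
```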

4.4.2. Regression Analysis

In this study, a multiple regression was conducted in order to examine the relationship between all of the significantly correlated factors and the dependent variable, project success.

In conducting the multiple regression analysis, several main assumptions were considered and examined in order to ensure that the analysis was appropriate (Hair et al., 2006). The assumptions examined are: (1) outliers, (2) normality, linearity and homoscedasticity, and (3) multicollinearity. To address the first, the data were checked for any potential outliers. Pallant (2007) noted that multiple regression is very sensitive to outliers (i.e. very high or low scores), so outliers should be removed before running the regression analysis (Tabachnick & Fidell, 2007); multivariate outliers can be detected using statistical methods such as casewise diagnostics. While conducting the multiple regression and collinearity diagnostics, five outliers were detected and removed. To check normality, a histogram of the residuals was plotted using the SPSS regression graphs; the graph below shows that the assumption of normality is acceptable.

Figure 1: Histogram (normality check)

Moreover, to check linearity, a normal probability plot was produced using the SPSS regression graphs; the plot below shows that the assumption of linearity is met.

Figure 2: Normal probability plot

To check the assumption of homoscedasticity (homogeneity of variance), a scatter plot was produced using the SPSS regression graphs; the plot shows that most of the data points are scattered compactly in a homogeneous pattern.

Figure 3: Scatter plot

The figures above show that the assumptions of linearity, normality and homoscedasticity have been met. Moreover, in this case the tolerance values are much higher than 0 (0.373 to 0.500; see the coefficients in Table 4.12), and the Variance Inflation Factor (VIF), which is simply the reciprocal of tolerance, ranges from 1.999 to 2.680. When the VIF is higher than 10 there is high multicollinearity and the B and Beta coefficients are unstable; since the VIF values here are well below 10, multicollinearity is not a threat to the substantive conclusions of this study and the B and Beta coefficients are stable.
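As a hedged illustration of the collinearity check described above, tolerance and VIF values could also be computed outside SPSS, for example as follows; the file and column names are again hypothetical placeholders.

```python
# Minimal sketch: VIF and tolerance for each predictor (tolerance = 1 / VIF; VIF > 10
# signals problematic multicollinearity). File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_scores.csv")
X = sm.add_constant(df[["me_practice", "me_competency",
                        "downward_accountability", "project_life_cycle"]])

for i, name in enumerate(X.columns):
    if name == "const":          # skip the intercept column
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.3f}, tolerance = {1 / vif:.3f}")
```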
4.4.2.1 Model summary
Table 4.10: Model summary of the multiple regression

R      R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change   Durbin-Watson
.693   .480       .442                .41538                       .480              12.482     4     54    .000            2.108

a. Predictors: (constant), project life cycle, monitoring and evaluation, competency, downward accountability
b. Dependent variable: project success

R² is a measure of how much of the variability in the outcome (in this case project success) is accounted for by the predictors (i.e. the factors of project success). As shown in Table 4.10, the R² value is 0.48, which means that the mentioned factors of project success as a whole explain 48% of the variation in project success. The ANOVA table also suggests that the model is significant in explaining this variance; the significance result at p < 0.05 (0.000) provides support for the model's significance.

4.4.2.2 Analysis of variance (ANOVA)
Table 4.11: ANOVA

Model        Sum of Squares   df   Mean Square   F        Sig.
Regression   8.615            4    2.154         12.482   .000
Residual     9.317            54   .173
Total        17.932           58

a. Dependent variable: project success
b. Predictors: (constant), monitoring and evaluation, monitoring and evaluation competency, downward accountability and project life cycle

Source: own survey, 2020

The ANOVA table, on the other hand, shows that the model is a good fit (F = 12.482, df1 = 4, df2 = 54, p < 0.001). That is, the sum of squares of the variation in project success explained by the latent variables (SSR = 8.615) is larger than the variation attributable to random effects (SSE = 9.317).
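As a simple consistency check (not part of the original analysis), the R² in Table 4.10 can be recovered from these sums of squares:

R² = SS_regression / SS_total = 8.615 / 17.932 ≈ 0.480,

which agrees with the value of .480 reported in the model summary.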

4.4.2.3 Coefficients
Table 4.12: Coefficients (dependent variable: project success)

Model                       B       Std. Error   Beta    t       Sig.   Tolerance   VIF
(Constant)                  1.107   .631                 1.993   .049
Monitoring and Evaluation   3.645   .138         .649    4.682   .000   .500        1.999
Competency                  2.826   .165         .024    4.160   .000   .438        2.283
Downward Accountability     1.079   .161         .040    2.980   .038   .373        2.680
Project life cycle          1.008   .202         .006    2.242   .047   .428        2.338

B and Std. Error are the unstandardized coefficients; Beta is the standardized coefficient.

In the coefficients table above, the B column shows the values of the regression coefficients for predicting the dependent variable from the independent variables, and the Std. Error column shows the standard errors associated with those coefficients. Beta, the standardized coefficient, is a measure of how strongly each predictor variable influences the criterion variable: these are the coefficients that would be obtained if all of the variables in the regression, including the dependent and all of the independent variables, were standardized, so their magnitudes can be compared to see which predictor has the larger effect. The relative absolute magnitudes of the Beta (β) coefficients at a given step therefore reflect their relative importance in predicting the modeled outcome.

Substituting the SPSS output presented in the table above into the equation (Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε) gives: Y = 1.107 + 3.645X1 + 2.826X2 + 1.079X3 + 1.008X4

The latent variables monitoring and evaluation (t = 4.682, p < 0.001), competency (t = 4.160, p < 0.001), downward accountability (t = 2.980, p = .038) and project life cycle (t = 2.242, p = .047) are statistically significant factors of project effectiveness and efficiency at the 5% level of significance. All of the factors considered, monitoring and evaluation, competency, downward accountability and project life cycle, therefore enter the regression equation.

Holding the other factors constant, a unit change in monitoring and evaluation would lead to a 3.645 improvement in project success; a unit change in competency would lead to a 2.826 improvement in project success; a unit change in downward accountability would lead to a 1.079 improvement in project success; and a unit change in project life cycle would lead to a 1.008 improvement in project success.
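Purely as an illustration of how the reported equation is read, the sketch below plugs hypothetical composite ratings into the B values of Table 4.12; the predictor values are invented, and the resulting score is only meaningful on the scale implied by those coefficients.

```python
# Minimal sketch: evaluating Y = b0 + b1*X1 + b2*X2 + b3*X3 + b4*X4 with the B values
# reported in Table 4.12. The predictor ratings below are hypothetical.
coefficients = {"me_practice": 3.645, "me_competency": 2.826,
                "downward_accountability": 1.079, "project_life_cycle": 1.008}
intercept = 1.107

ratings = {"me_practice": 4.0, "me_competency": 4.0,
           "downward_accountability": 4.0, "project_life_cycle": 4.0}

predicted_success = intercept + sum(coefficients[name] * value
                                    for name, value in ratings.items())
print(predicted_success)   # predicted project success score under the reported equation
```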

Chapter Five
Summary, Conclusion and Recommendations

5.1. Introduction
This chapter gives a summary of the key findings of the study, presented according to the objectives of the study. Conclusions are drawn from the findings, and recommendations are provided on the role of monitoring and evaluation functions in achieving project success and on the monitoring and evaluation practices assessed.

5.2. Summary of Key Findings


The findings showed that the CIE monitoring and evaluation system is performing well in general terms, but it also has areas for improvement around integrating the monitoring and evaluation system from the projects to the hubs and the central country office system; a "my project" and "my thematic area" mindset has also influenced the whole system, as some staff did not see the bigger picture of the organization as a whole.

The monitoring and evaluation team is affected by the availability of budget and its effective utilization, as well as by the absence of dedicated monitoring and evaluation staff. The role of monitoring and evaluation in the sustainability of projects also received a weak weighted average mean, which implies that the monitoring and evaluation system and the team's competency need to be strengthened so that projects can be sustained beyond the project period.

The research findings revealed that the complaint and response mechanisms and child participation received a low weighted mean, implying that CIE still has a long way to go in making the accountability mechanisms more robust across its different projects and mandates. A related finding is that there is no system for the staff of a project to raise concerns regarding management or leadership, since the only existing channel is the anonymous confidential system, which is designed to stop fraud. The findings also showed that there is a positive relationship between the role of monitoring and evaluation functions and project success. This means that the monitoring and evaluation system is in place, and that the role of monitoring and evaluation in project cycle management, the strengthening of the monitoring and evaluation function in improving downward accountability mechanisms, and the monitoring and evaluation team's competency are all contributing to the success of projects.

Thus, the presence of a sound monitoring and evaluation system helps greatly in achieving project success, although its absence does not necessarily result in project failure. The contributions of monitoring and evaluation are realized through installing a system, recruiting competent staff and continuously strengthening their capacity, strengthening the internal accountability mechanisms, and ensuring the sound involvement of monitoring and evaluation experts along the project cycle stages. There are, of course, other parameters that can contribute to project success, but the dimensions researched here were found to contribute to it.

Views on the involvement of monitoring and evaluation experts along the project life cycle stages varied: some said they should participate in the whole project life cycle, some only in the baseline, evaluation and monitoring, and others only in the planning stage of a project. It was also reflected that CIE project managers are not certified project managers and are not well conversant with project management tools and techniques, which is why the monitoring and evaluation tools are not properly used as one of the project management tools. The multiple regression analysis models the linear relationship between the dependent variable and the independent variables, which were the monitoring and evaluation system, the monitoring and evaluation team competency, downward accountability and the project life cycle stage. According to the results of the regression analysis, the independent variables explain 48 percent of the variation in success (R²). The F statistic (ANOVA) for the model was 12.482, which was significant at the 5 percent level of significance (the p-value of 0.000 was less than 0.05). The coefficients table provides the necessary information to predict success from the monitoring and evaluation system, monitoring and evaluation competency, strengthened downward accountability and the project life cycle stages.

5.3. Conclusion
The key role of the monitoring and evaluation function is to provide evidence-based feedback to management, which serves as an input for decision making and for tracking project progress. The research problem that this study intended to address was the role of monitoring and evaluation functions in achieving project success.

In response to the research problem, and hence in answering the research questions, this study gathered and analyzed data that led to the following conclusion: in general, projects implemented by CIE are successful. The success of these projects is the result of a strong monitoring and evaluation system, a competent monitoring and evaluation team, a strong downward accountability mechanism, and close monitoring of the projects at all stages of the project life cycle.

5.4 Recommendations
Based on the findings of the study, the researcher offers the following recommendations for CIE to take into account in its monitoring and evaluation strategic direction and future programming.
 The findings revealed that the budget allocated for M&E support, specifically for monitoring and evaluation experts and activities, has not been adequate; an adequate budget should therefore be allocated.
 The monitoring and evaluation practice will be improved if projects are implemented according to plan and concrete decisions are made on the issues identified during project monitoring.
 Project and program managers do not use the M&E tools as one of the project/program management tools. The researcher recommends that M&E tools should be part of the key performance indicators for which managers are held accountable for their actions or inactions.

References
Abbasi, et al. (2014). Project failure case studies and suggestion. International Journal of Computer Applications.

Abraham, T. H. (2004). Model development for improving the performance of projects: A case study on Ethiopian Roads Authority (ERA). Unpublished MSc thesis, AAU, Ethiopia.

Forbes, Daniel P. (1998). Measuring the unmeasurable: Empirical studies of nonprofit organization effectiveness from 1977 to 1997. Nonprofit and Voluntary Sector Quarterly, 27(2), 183-202.

Ferris, James M., & Graddy, Elizabeth (1994). Organizational choices for public service supply. Journal of Law, Economics and Organization, 11(1), 126-141.

Kusek, Jody Zall, & Rist, Ray C. Ten Steps to a Results-Based Monitoring and Evaluation System. Washington, DC: The World Bank, pp. 23-25.

MoFED (2008). National Economic Parameters and Conversion Factors for Ethiopia (3rd edition). Addis Ababa: Ministry of Finance and Economic Development.

Muriithi, N., & Crawford, L. (2003). Approaches to project management in Africa: Implications for international development projects. International Journal of Project Management, 21(5), 309-319.

Naidoo, I. A. (2011). The role of monitoring and evaluation in promoting good governance in South Africa: A case study of the Department of Social Development (Doctoral dissertation). University of the Witwatersrand.

Prabhakar, G. P. (2008). What is project success: A literature review. International Journal of Business and Management, 3(9), 1-10.

Save the Children International (2016). Monitoring, evaluation, accountability and learning unit quality benchmark piloting report. Department of Program Development and Quality (PDQ).

Singh, Kultar (2007). Quantitative Social Research Methods. Los Angeles, CA: Sage.

Annexes
Questionnaire for M&E and Project Management Expert
Jimma University
School of Post Graduate Study
Questionnaire on "the role of monitoring and evaluation functions in achieving project success" in Compassion International Ethiopia assisted projects.

Questionnaire

Dear Respondent,

I am conducting research on "the role of monitoring and evaluation in achieving project success" in Compassion International assisted projects. The purpose of the study is purely academic. The general objective of the research is to assess the role of monitoring and evaluation in the success of Compassion International assisted projects, and the specific objectives are to assess the monitoring and evaluation practices and to examine their contribution to project success.

Your participation in this questionnaire is voluntary; you will not be paid for your participation. You may withdraw from the study at any time without penalty or harm of any kind. If you decline to participate or choose not to complete the questionnaire, the researcher will not inform anyone of your decision, and no foreseeable negative consequences will result. Completing the questionnaire will require approximately 25 minutes. There are no known risks associated with completing the questionnaire. If, however, you feel uncomfortable in any way during this process, you may decline to answer any question or choose not to complete the questionnaire. The researcher will not identify you by name in any report using information obtained from your questionnaire; your confidentiality as a participant in this study will remain secure. Subsequent uses of data generated by this questionnaire will protect the anonymity of all individuals.

Thank you very much for your time and cooperation.

Part One: General information about the respondent

1.1 Full name of the respondent (optional): ----------------------------------------
1.2 Sex:   Male   Female
1.3 Education level and type:   1) PhD   2) MSc/MA   3) BA/BSc   4) Diploma
1.4 Current position held:
    1) Technical team leader   2) Head of Thematic Sector   3) Program Manager
    4) Program Specialist      5) MEAL Manager              6) Humanitarian Response

Part Two: assessment of project success

S.N   Item                                                          Strongly Disagree (1)   Disagree (2)   Neutral (3)   Agree (4)   Strongly Agree (5)
2.1   Projects are completed at the planned time
2.2   Projects are completed within the planned budget
2.3   Projects have national as well as international quality
      standards that must be met
2.4   Project beneficiaries are satisfied and impacted positively
2.5   Projects realized meet the planned objectives and outcomes
      they are intended to achieve

2.6 Are there any other project success indicators missing from the above list? If so, please
specify below: ----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------

Part Three: Assessment of monitoring and evaluation practices, more specifically the monitoring and evaluation system

S.N   Item                                                             Strongly Disagree (1)   Disagree (2)   Neutral (3)   Agree (4)   Strongly Agree (5)
3.1   The monitoring and evaluation system is effective, efficient
      and contributes to achieving the project objective
3.2   The scope and purpose of the monitoring and evaluation
      system is clear
3.3   The monitoring and evaluation system is built on a thorough
      situational analysis
3.4   The monitoring and evaluation system has buy-in from the
      senior management team
3.5   The monitoring and evaluation system reflects the theory of
      change and supports the mission and vision of the organization

3.6 Can you give an example of a time when monitoring and evaluation helped to achieve project success?

Part Four: Assessment of monitoring and evaluation competency

S.N   Item                                                              Strongly Disagree (1)   Disagree (2)   Neutral (3)   Agree (4)   Strongly Agree (5)
4.1   The organization has a system in place to ensure that the
      children it aims to assist and other stakeholders have access
      to timely, relevant and clear information about the
      organization, program, project and its activities
4.2   The organization has a system to analyze the information
      collected from stakeholders to further improve the quality of
      the program
4.3   The organization has a system in place to listen to the people
      it aims to assist, incorporating their views and concerns and
      letting them influence program decisions in project cycle
      management
4.4   The organization has a system to build the capacity
      (knowledge, skills and attitudes) of children to participate
      in project/program development

Part Five: An assessment of downward accountability

S.N   Item                                                              Strongly Disagree (1)   Disagree (2)   Neutral (3)   Agree (4)   Strongly Agree (5)
5.1   The organization has a system in place to incorporate
      children's participation in project/program development,
      implementation, monitoring and evaluation
5.2   The organization has a system in place to enable the
      beneficiaries it aims to assist and other stakeholders to
      provide feedback and receive a response through effective,
      accessible and safe information sharing mechanisms and
      processes
5.3   The organization has a system in place to store, verify and
      analyze feedback and complaints and use them as an input for
      future programming and quality program delivery

5.4 What do you think is the role of monitoring and evaluation in improving the
downward accountability mechanisms? -------------------------------------------

Part Six: An assessment of the project life cycle in your project

S.N   Item                                                              Strongly Disagree (1)   Disagree (2)   Neutral (3)   Agree (4)   Strongly Agree (5)
6.1   The engagement of monitoring and evaluation staff in the
      initiation stage of a project is high
6.2   The role of monitoring and evaluation in baseline development
      is high
6.3   The engagement of monitoring and evaluation staff in the
      planning stage of a project is high
6.4   The engagement of monitoring and evaluation in the execution
      stage of a project is high
6.5   The engagement of monitoring and evaluation in the evaluation
      stage of a project/program is high
6.6   The engagement of monitoring and evaluation in the closing
      stage of a project is high

