C. Inception Report
This Section is addressed to:
- Evaluation teams developing the evaluation methodology for In-Depth Evaluations
- Project Managers judging the quality of the evaluation methodology for Independent Project Evaluations
An Inception Report summarizes the review of documentation (desk review) undertaken by an evaluator mandated by UNODC and specifies the evaluation methodology, thereby determining the exact focus and scope of the exercise, including the evaluation questions, the sampling strategy and the data collection instruments. The evaluator is therefore expected to deliver an Inception Report as one of the key deliverables, which is shared with the Project Manager and the Independent Evaluation Unit for comments.

The Inception Report provides an opportunity to elaborate, at an early stage of the evaluation exercise, on the evaluation methodology proposed in the ToR and its related issues. It also ensures that evaluation stakeholders have a common understanding of how the evaluation will be conducted.

The evaluation team develops an Inception Report containing the methodology used to answer the evaluation questions, based on information derived from the ToR, the desk review and the evaluation team briefing. The UNODC Inception Report Template, presented in paragraph 1 below, provides the minimum requirements for the development of an evaluation methodology. The evaluation team may therefore go beyond the Inception Report Template, depending on the needs of the evaluation (paragraph 2 below).
IV. Sampling Strategy
V. Limitations to the Evaluation
VI. Timetable

The template for the Inception Report can be found in Chapter IV Tools. Specific components of the Inception Report are elaborated below.
a) Methodological approach

Special attention shall be paid to the production of a methodological approach that results in unbiased and objective evaluation findings. The choice of the evaluation approach depends on the context, and approaches are not necessarily mutually exclusive. Regardless of the methodological approach chosen, the same steps must be undertaken: defining evaluation questions, identifying indicators, collecting and analysing data, and reporting and using findings.

Approaches may include:
- Participatory: evaluation in which responsibilities for planning, implementing and reporting are shared with stakeholders, who may help define evaluation questions, collect and analyse data, and draft and review the report.
- Utilization-focused: evaluation judged by how useful it is and how it is actually used.
- Theory-based: evaluation that measures the extent to which the theory of a programme/project is adequate.
- Gender and human rights responsive: evaluation that assesses the effect of programmes/projects on gender equality, women's empowerment and human rights.

Two related terms recur in evaluation design:
- Control group: group in an experiment whose members are not exposed to a project/programme.
- Treatment group: group in an experiment whose members are exposed to a project/programme.

To represent the evaluation design, the evaluation team can use an evaluation matrix with the following columns: questions, type of question, indicators, target, baseline, design, data sources, sample, data collection instrument and data analysis.

Data collection instruments include, but are not limited to, the following:
- desk review
- questionnaires
- surveys
- interviews
- focus groups
- workshops
- field visits
- observations
- case studies

TIP
- It is recommended to pilot the data collection instruments (i.e. test them and correct them accordingly).
- The quality of the information obtained during an interview depends largely on the interviewer's skills.
- Cultural sensitivity is a must for the interviewer, who must pay attention to the respondent's reactions during the interview to avoid causing any offense.

b) Sampling Strategy

The evaluation team develops the sampling techniques that will be applied to the different data collection instruments and can use a combination of techniques. Sampling techniques include, but are not limited to:
- Random sampling: simple random sampling, random interval sampling, random-start and fixed-interval sampling, stratified random sampling, random cluster sampling, multistage random sampling
- Non-random sampling: purposeful sampling, snowball sampling, convenience sampling

The evaluation team should identify the whole population (or stakeholder groups) for each data collection instrument (e.g. the target group of the questionnaire, focus group or interview) and determine a sample that (i) reflects the whole population as closely as possible, and (ii) is of significant size (the larger the sample, the smaller the variability). The team should critically discuss whether the chosen sample size is statistically relevant and what sampling errors might occur. For this purpose the evaluation team could use a Sample Stakeholder Coverage Table; an example is given in Chapter IV Tools.
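For illustration, the sketch below shows one way to implement proportional stratified random sampling when selecting interviewees from identified stakeholder groups. It is a minimal sketch only: the group names, population counts and overall sample size are hypothetical and not drawn from any actual evaluation.

```python
import random

# Hypothetical population of stakeholders, grouped (stratified) by type.
# Group names and counts are illustrative only.
population = {
    "government counterparts": 40,
    "implementing partners": 25,
    "beneficiaries": 120,
    "UNODC staff": 15,
}

def stratified_sample_sizes(population, total_sample):
    """Allocate a total sample across strata proportionally to stratum size."""
    n = sum(population.values())
    # At least one respondent per stratum, otherwise proportional allocation.
    return {g: max(1, round(total_sample * c / n)) for g, c in population.items()}

def draw_sample(group_members, k):
    """Simple random sample of k members within one stratum."""
    return random.sample(group_members, k)

sizes = stratified_sample_sizes(population, total_sample=30)
for group, k in sizes.items():
    # In a real evaluation the members would come from stakeholder lists;
    # here we fabricate placeholder identifiers.
    members = [f"{group} #{i}" for i in range(1, population[group] + 1)]
    print(group, "->", draw_sample(members, k))
```

Because of rounding, the per-group sizes may not sum exactly to the requested total; in practice the evaluation team would adjust the allocation and document it in the Inception Report.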
The data collection and analysis methods should be sufficiently rigorous to assess the subject of the evaluation and ensure a complete, fair and unbiased assessment.
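One way to keep the data collection and analysis methods aligned with the evaluation questions is to hold each row of the evaluation matrix described above in a simple data structure, so that every question stays linked to its indicators, sources and analysis method. The sketch below is an illustration only: the field names mirror the matrix columns, and the example row is entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EvaluationMatrixRow:
    """One row of the evaluation matrix; fields mirror its columns."""
    question: str
    question_type: str          # e.g. descriptive, normative, cause-and-effect
    indicators: list
    target: str
    baseline: str
    design: str                 # e.g. pre/post comparison with control group
    data_sources: list
    sample: str
    data_collection_instrument: str
    data_analysis: str

# Hypothetical example row, for illustration only.
row = EvaluationMatrixRow(
    question="To what extent did the training improve case-handling capacity?",
    question_type="cause-and-effect",
    indicators=["% of trained officers applying the new procedures"],
    target="80% of trained officers",
    baseline="To be reconstructed from secondary data",
    design="Pre/post comparison of treatment and control groups",
    data_sources=["training records", "interviews"],
    sample="Stratified random sample of trained officers",
    data_collection_instrument="semi-structured interview",
    data_analysis="descriptive statistics and content analysis",
)
print(row.question)
```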
3. Evaluating Impact
This paragraph is addressed to evaluation teams tasked with assessing the impact of the subject evaluated.
In the event that impact evaluation was planned for at the design stage of a programme or project (with appropriate collection of baseline data, creation of control and treatment groups, and creation of a monitoring system), the evaluation team will be able to adopt the following methodology to measure impact: use existing monitoring data and develop large-scale sample surveys in which treatment and control groups are compared before and after the project's implementation (a sketch of this before/after comparison is given below).

In the event that impact measurement was not planned for at the design stage of a programme or project, the evaluation team will have to develop a methodology which takes data, budget and time constraints into account. The following methodology could be used: small-scale rapid assessments and participatory appraisals, where estimates of impact are obtained by combining group interviews, key informants, case studies and available secondary data.
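Where baseline data and treatment/control groups exist, the before/after comparison described above is often computed as a difference-in-differences. The sketch below is a minimal illustration: the outcome values are invented, and the estimate is only valid under the assumption that the two groups would have followed parallel trends without the project.

```python
# Hypothetical outcome means (e.g. an indicator score) for treatment and
# control groups, before and after project implementation.
treatment = {"before": 42.0, "after": 55.0}   # exposed to the project
control = {"before": 40.0, "after": 46.0}     # not exposed

def difference_in_differences(treatment, control):
    """Impact estimate: change in the treatment group minus change in the control group."""
    treatment_change = treatment["after"] - treatment["before"]
    control_change = control["after"] - control["before"]
    return treatment_change - control_change

# (55 - 42) - (46 - 40) = 13 - 6 = 7 points attributable to the project,
# under the parallel-trends assumption.
print(difference_in_differences(treatment, control))
```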
To address data constraints, the following solutions are proposed:
- reconstruct baseline data
- reconstruct a comparison group

To address budget constraints, the following solutions are proposed:
- simplify the evaluation design
- clarify client information needs
- reduce costs by reducing the sample size (see the sample-size sketch after the table below)
- reduce the costs of data collection and analysis

To address time constraints, the following solutions are proposed:
- use existing documentary data
- reduce the sample size
- undertake rapid data collection methods
- hire more resource people
- invest in data collection and analysis technology

Challenge: Inexistent or incomplete baselines
Proposed solutions:
- Use other data/information systems (secondary data) to reconstruct the baseline
- Use individual recall/retrospective interviewing techniques (respondents are asked to recall the situation at around the time the project began)
- Consult pre- and post-project evaluations, if any
- Use participatory group techniques to reconstruct the history of the community and to assess the changes that have been produced by the project
- Undertake interviews with key informants, preferably persons who know the target community as well as other communities and therefore have a perspective on relative changes occurring over time¹
- Find a matching comparison group, if any

Challenge: Attribution problem
Proposed solutions:
- Analyse contextual factors affecting the results
- Provide an honest presentation of the UNODC contribution alongside that of other agencies/institutions
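To make the "reduce costs by reducing the sample size" trade-off concrete, the sketch below uses the standard formula for the sample size needed to estimate a proportion at a given confidence level and margin of error, with a finite population correction. The population size and target margins are hypothetical.

```python
import math

def required_sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 corresponds to roughly 95%).
    p: assumed proportion; 0.5 gives the most conservative (largest) size.
    """
    # Sample size for an effectively infinite population.
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Hypothetical survey of 500 beneficiaries: widening the margin of error
# from 5% to 10% roughly quarters the required sample, cutting costs.
print(required_sample_size(500, 0.05))   # ~218 respondents
print(required_sample_size(500, 0.10))   # ~81 respondents
```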
¹ M. Bamberger, J. Rugh and L. Mabry, RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints.