Serving The Community Through Successful Project Delivery:
A User Guide to Post Implementation Reviews
February 2009
EFFICIENCY UNIT
Vision Statement
To be the preferred consulting partner for all Government
bureaux and departments and to advance the delivery of
world-class public services to the people of Hong Kong.
Mission Statement
To provide strategic and implementable solutions to
all our clients as they seek to deliver people-based
Government services. We do this by combining our
extensive understanding of policies, our specialised
knowledge and our broad contacts and linkages
throughout the Government and the private sector. In
doing this, we join our clients in contributing to the
advancement of the community while also providing a
fulfilling career for all members of our team.
We published the Government Business Case Guide in May 2008 to assist departments in
determining whether and how a project should be undertaken. This Guide closes the loop by
advocating the need for Post Implementation Reviews (PIRs) and setting out a framework for
doing so.
In managing our programmes and projects, it is essential that we assess whether the intended results have been achieved, understand why if they have not, and identify opportunities for further improvement.
This can help improve our service delivery, ensure that public money is well spent and
demonstrate accountability. It will also be useful in responding to queries from oversight
authorities.
The PIR is a tool which aims to help us achieve the above objectives. Much of the content of this Guide will appear to be no more than common sense to experienced officers. Indeed, we believe all Bureaux/Departments constantly review the outcomes of their programmes, but making PIRs a conscious part of the project cycle is good practice in modern-day public sector management.
This Guide is a first attempt to pull together information on how we might conduct PIRs for non-works and non-information and communications technology (ICT) projects. It provides a variety of guidelines, tools and techniques to assist colleagues, but should not be followed slavishly.
Departments should always exercise common sense in deciding how much time and resources
should be devoted to a review. There will be occasions when the review will be no more than
a short report on file. On other occasions it will be a minor project in itself. The most important
thing is that a review is conducted.
The Efficiency Unit would warmly welcome any feedback on this Guide.
What is a PIR?
A PIR evaluates whether the project has achieved its intended objectives, reviews the performance
of project management activities and captures learning points for future improvements.
It is not necessary to conduct a full-scale PIR for every project under review. For example, a
simple minute may suffice for a small-scale project.
This Guide should not be treated as a strait-jacket. It does not attempt to provide a one-size-fits-all approach for reviewing all types of projects. Readers must judge which sections and actions suggested in the Guide are applicable to their individual situations. The Guide is intended primarily for:
• Project directors / project sponsors – the Guide serves to raise awareness of the need to conduct a PIR, and of the requirement to prepare for a PIR during the project planning and implementation stages. This will help ensure that basic information such as project objectives and expected costs and benefits is clearly set out and well documented
• PIR team members – the Guide advises on the various steps, approaches and techniques that may be used to conduct a PIR.
The generic term “department” is used throughout the Guide to describe the various levels of
Government organisational structures such as bureaux, departments and agencies.
The term “project” refers to both projects and programmes. A project focuses on delivering specific outputs or products and has a definite lifetime. A programme, by contrast, is concerned more with delivering outcomes, has a broader focus and may not have a clear end date.
It should be emphasised that a PIR is not merely for measuring whether the project has delivered its agreed outputs, but also for examining how well the delivered outputs matched the actual needs that the project aimed to fulfil.
Example
A department introduced an interactive voice response system (IVRS) so that customers could book appointments by telephone, and the system was delivered as planned. However, most of the department’s customers are elderly persons and they are not used to making appointments through the IVRS. As a result, the project has made little impact on improving the department’s overall appointment booking service.
A PIR is essentially an internal learning process rather than an exercise in finger-pointing or blame allocation.
It is perfectly acceptable for a PIR to conclude that a project failed. The important point is that
the department learns from the experience. It is better to identify a failure early and terminate the
project than to continue to repeat the mistake.
It is not necessary to conduct a full-scale PIR for all the projects selected. For instance, a simple
minute or paper outlining the assessment results and recommendations may suffice for small-scale
projects.
A PIR need not be a one-off exercise. A project may last for a long time, different outcomes may be achieved as the project proceeds, and some outcomes may take years to realise. For such projects, it may be useful to conduct PIRs periodically.
Example
The diagram below shows the possible timing for conducting PIRs.
[Diagram: possible timing for conducting PIRs over the project life cycle]
An independent team can evaluate the project with a fresh pair of eyes, but it may be more costly and will likely take longer as it needs to understand the project from scratch.
Given that both the original project team and the independent team options have their own
merits and demerits, it may be more appropriate to establish a review team comprising a mix of
independent parties and original project team members with the independent parties taking a
leading role in the review.
To conduct a PIR effectively, departments should:
• focus on identifying learning points for future improvements instead of fault-finding. Evaluating the processes rather than the performance of individual stakeholders is more likely to gain the cooperation of those involved
• build in the PIR process at the project planning stage to ensure that baseline information
such as project objectives, estimated costs, performance measures, deliverables, milestones,
time frame and expected benefits are clearly set out. This would avoid the common pitfall
of having insufficient baseline information, especially the service levels before project
implementation, for comparison
• express the project outputs and outcomes in measurable terms as far as possible
• ideally, establish the cause-effect relationship between the project and its outputs/
outcomes. However, this will rarely be possible for outcomes which may be subject to the
influence of external factors. A possible way to overcome this issue is to conduct impact
evaluation. More information is at Appendix 1
• get sponsorship from the management to (i) acquire the necessary resources to conduct
the PIR, (ii) secure access to information, and (iii) get its commitment to consider / follow
up the recommendations.
Departments may have already committed themselves to conducting a PIR in the business case report or funding paper, or there may be an established mechanism for doing so. In such cases, the review objectives and scope of assessment may already have been defined. Nevertheless, it is advisable to re-examine the objectives and scope of the PIR and, if necessary, refine them to take into account the latest developments in the project and the community.
Preliminary research should be conducted to understand the project background, its objectives
and the key issues. Moreover, it is advisable to consult key stakeholders such as project sponsors,
project team and end users on whether there are any specific areas that need to be addressed in
the PIR.
A PIR may be conducted at critical milestones during project planning and implementation,
particularly for long-term and on-going projects. Given that the project outputs and outcomes may
not be fully delivered at the time of review, PIRs conducted at this stage usually focus on:
• assessing whether the project’s goals, objectives and assumptions are still relevant and
valid
A PIR may be conducted some time after the project closure. At this stage, all the outputs should
have been delivered and short-term outcomes realised. Such PIRs would generally focus on:
• evaluating the project achievements and their contribution to the business objectives of the
sponsoring department(s) or the government
• identifying improvement areas and drawing lessons for delivery of future projects
For long-term projects it may be worthwhile to continue to conduct PIRs periodically, say every five years, to ascertain whether the project is still relevant and delivering services in the most cost-effective manner.
While there may be specific areas that warrant review, there are common areas that a typical PIR
should cover. These areas include:
Project outcomes
• user feedback.
Project management
• time – whether the project is delivered on schedule. It covers both the overall project
schedule and the schedule for individual milestones
• cost – whether the project expenditure is within budget. Both non-recurrent and recurrent costs should be examined
• quality – whether the project deliverables meet the required quality standards.
Lessons learnt
A PIR should identify the areas for improvement and draw lessons for future improvement.
Lessons can be broadly grouped into:
• Project-specific lessons for improving the subsequent phase(s) of the project being
reviewed
• Lessons that can be generalised for improving the delivery of future projects.
Example
A department has outsourced its cleansing services for the first time. A PIR was
conducted three months after contract commencement. The review team found that
the contractor frequently failed to respond promptly to workload fluctuations because
it could not deploy its staff flexibly under the input-based contract specifications. The
review team recommended the department use output-based specifications in its future
cleansing contracts and consider extending the approach to other contracts.
For projects which have not yet been completed, it may not be possible to evaluate all of the above areas. The project plan should be examined to identify what is expected to be achieved or delivered by the time of the evaluation.
Given the wide range of review areas, the challenge at this stage is to prioritise and focus on key issues to avoid information overload and to ensure that the PIR can be completed within the time and resource constraints.
A key task of a PIR is to compare the actual project performance against the approved plan. For
some aspects of project performance, this is a straightforward and easy task, for example, those
relating to project management issues such as cost and time. However, for others, in particular
those relating to project outcomes, measuring the actual performance can be a challenge. For
example, to evaluate the outcome of an education project for disabled students aiming at helping
them achieve independence in daily life activities, one would need to determine how to measure
“independence in daily activities”.
Before developing the method for assessing the project outcomes, the review team should
determine the outcomes to be assessed. The review team should note that for some projects, it
may take time for the outcomes to develop or materialise.
For example, for an anti-smoking campaign:
• Outputs: television advertisement
• Short-term outcome: increased awareness of smoking hazards
• Long-term outcome: less smoking
Since a project may produce different outcomes as it proceeds, the review team needs to determine which outcomes are expected to have been realised at the time of the review.
Project outcomes and their performance measures (see note below) may have already been defined in project documents such as the business case report, project brief and funding paper. Therefore, the first step is to examine the project documents to see if suitable and adequate performance measures are already in place.
If the review team needs to develop new performance measures, several possible ways can be
considered:
• consult end users on what should be measured, i.e. the service attributes that are important
to an end user
• benchmark with other relevant local and overseas bodies to use standard values set by
specialised agencies, for example, World Health Organisation, World Bank, Organisation
for Economic Co-operation and Development, or use measures adopted by other
government agencies
Note
A performance measure is a quantifiable metric chosen to assess performance, e.g. customer satisfaction rate.
A performance measure should consist of three components: area of assessment, indicator(s) and
target value(s) as illustrated below:
• Area of assessment (e.g. effectiveness of an environmental protection project to reduce paper consumption)
• Indicator(s) (e.g. annual paper consumption)
• Target value(s) (e.g. reduction of the annual paper consumption by 10%)
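For review teams that keep their measures in a structured form, the three components map naturally onto a simple record so that targets and actuals sit side by side during the review. The sketch below is illustrative only; the field names and figures are assumptions, not a format prescribed by this Guide.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceMeasure:
    """One performance measure: area of assessment, indicator and target value.

    Illustrative structure only; the field names are assumptions.
    """
    area: str                      # area of assessment
    indicator: str                 # the quantifiable metric observed
    target: float                  # target value to be achieved
    higher_is_better: bool = True  # direction of improvement
    actual: Optional[float] = None # filled in during the PIR

    def met(self) -> Optional[bool]:
        """True if the actual value meets the target; None if no data yet."""
        if self.actual is None:
            return None
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

# The paper-consumption example from the text: a 10% reduction against a
# notional 1,000-ream baseline gives a target of 900 reams; the actual
# figure of 870 is hypothetical.
paper = PerformanceMeasure(
    area="Effectiveness of an environmental protection project",
    indicator="Annual paper consumption (reams)",
    target=900.0,
    higher_is_better=False,
    actual=870.0,
)
print(paper.met())  # True: consumption fell below the 900-ream target
```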
Below is an illustrative example of using multiple performance measures to assess the quality of a vocational training programme.
[Table: multiple performance measures for a vocational training programme]
Whatever measures are used, each should be:
• direct – the measure should directly measure the outcome. For example, a programme to enhance children’s academic performance might use examination or assessment results rather than the attendance rate of students as a performance measure
• adequate – the measure should be sufficient to assess the outcome. Project performance can be reflected by a number of measures. Take tourism promotion as an example: measures can be the number of visitors and/or per capita spending. The review team should consider whether the defined measure(s) is sufficient to assess the achievement.
Data on performance measures may be located in readily available sources, e.g. performance reports, or may need to be collected from other sources. The table below shows possible sources of data for evaluating project outcomes and their common data collection methods:
[Table: data sources and common data collection methods]
Each data collection method described above has its pros and cons. The typical ones are listed in
Appendix 2.
The review team can consider combining several different methods to maximise the merits and
minimise the demerits of each data collection method. For example, to measure user satisfaction,
a questionnaire survey may be conducted after the attributes which the users consider crucial are
identified through focus groups.
Normally, the performance measures on “scope”, “time”, “cost” and “quality” are rather
straightforward as they are usually well-defined in various project documents and document
review is a common method used to collect these data. By comparing the planned data outlined
in the business case report / funding paper against the actual data captured, the performance of
project management can be objectively assessed.
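For example, a planned-versus-actual comparison for cost or time reduces to a simple percentage variance. The sketch below is illustrative; the function name and all figures are hypothetical.

```python
def variance_pct(planned: float, actual: float) -> float:
    """Percentage deviation of actual from planned (positive = above plan)."""
    return (actual - planned) / planned * 100

# Hypothetical figures: budget from the funding paper vs. actual expenditure
# captured during the PIR.
print(f"Cost variance: {variance_pct(planned=5_000_000, actual=5_600_000):+.1f}%")  # +12.0%

# The same comparison works for schedule, e.g. planned vs. actual duration in months.
print(f"Time variance: {variance_pct(planned=18, actual=21):+.1f}%")  # +16.7%
```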
The main task in this stage is to collect data on both the expected and the actual performance and compare them. Data collection should not be pursued beyond the point of diminishing returns, for example where there is:
• a significant increase in cost, time and complexity with no real gain in achieving the original objectives of the PIR
• a data collection process so tedious that the data owners may not wish to take part in it.
Before collecting data, the review team should consider whether it needs to:
• seek clearance and permission from appropriate authorities, in particular when dealing
with sensitive information such as personal data
• explain clearly the objectives of the PIR as well as the purposes of collecting the data to
the data owners in order to avoid any misunderstandings
• consider the appropriate timing for data collection to avoid the possibility that the data is influenced by unrelated factors. For example, measuring public perception of a particular government service during sensitive periods, such as right after the budget speech or policy address, may unduly influence the results.
Project outcomes

Project management

Scope
AREAS OF ASSESSMENT | AREAS TO PROBE

Time
AREAS OF ASSESSMENT | AREAS TO PROBE
• Full-live run / implementation date | How does the actual project schedule compare with the approved schedule?

Cost
AREAS OF ASSESSMENT | AREAS TO PROBE
• Project expenditure (non-recurrent and recurrent) | Any deviation from the approved budget?
• Staff resources (non-recurrent and recurrent) | Any deviation from the approved manpower plan?

Quality
AREAS OF ASSESSMENT | AREAS TO PROBE

(Note: For projects which have not yet been completed, not all the areas described above need to be evaluated.)
In conducting data analysis, the review team should pay special attention to the following issues:
The value of money changes over time. A dollar today is not worth the same as a dollar five years ago, owing to factors such as inflation. It may not be appropriate to compare directly the project costs and financial benefits estimated in the business case with the actual figures, as they were calculated at different points in time. It is necessary to remove the effect of the time value of money so that all values can be compared on an equal basis.
Discounting is a method used to compare costs incurred and financial benefits realised at different time periods: a discount rate is applied to convert future costs or benefits to the equivalent costs or benefits in today’s values (the present value). Readers can refer to the Efficiency Unit’s publication, A Government Business Case Guide (see note below), for further information on the discounting method.
Note
The guide is available at http://www.eu.gov.hk/english/publication/pub_bp/files/Business_Case_Guide.pdf
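As a minimal sketch of the discounting arithmetic, assuming a single constant annual discount rate, the present value of a future amount can be computed as follows. The 4% rate and dollar figures are purely illustrative; the Business Case Guide remains the authoritative reference.

```python
def present_value(amount: float, years: int, rate: float) -> float:
    """Discount a future cost or benefit to today's value: PV = FV / (1 + r)^t."""
    return amount / (1 + rate) ** years

# Illustrative figures only: a $1,000,000 benefit realised five years out,
# discounted at an assumed 4% annual rate, is worth about $822,000 today.
print(round(present_value(1_000_000, years=5, rate=0.04)))  # 821927
```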
The business case and project plan were developed on the basis of past assumptions and predictions about the future. As the project proceeds, project parameters such as the schedule or even the scope may have been changed to meet the actual requirements at the time of implementation. The review team must be fully aware of these changes (approved or not) and understand the rationale behind them. The approved changes, rather than the original parameters stated in the business case report / project plan, should be used as the basis for comparison.
Attribution issues
The review team should examine whether the actual outcome is attributable to the project itself or to other factors. A project outcome may be influenced by factors other than the project, and simply looking at the changes before and after the project may not be sufficient for a credible evaluation.
The review team should take into account any changes in the external environment during project
implementation and assess whether they have had any significant impact on the project.
The following are some common methods for identifying issues and lessons learnt:
In the previous stage, the review team compared what actually happened against what was planned. This comparison helps identify what was done well and what was done badly, and forms the basis for further investigation to identify the underlying issues (or critical success factors) and to develop recommendations.
Useful techniques for this investigation include:
• cause and effect diagram
• interrelationship diagram
• 5 whys
These techniques are described in Appendix 3.
Document review
During project implementation, the project team may have used the following logs to facilitate
their monitoring and control of the project:
• daily logs – to record the daily activities and work done by individual staff
• issue logs – to record the status of the resolution of all the issues raised by the stakeholders
The review team is advised to review these project logs to identify whether there are any issues
that are worth further examination.
Focus groups and interviews
Focus groups and interviews allow the review team to have a more in-depth discussion with stakeholders to identify issues and solutions. The discussion would normally focus on what went right, what went wrong, and what can be improved. The hallmark of focus groups is the explicit use of group interactions to generate ideas and insights. Interviews, on the other hand, are particularly appropriate in situations involving complex subject matter, high-status interviewees or sensitive subject matter.
To begin, the review team can consider using the 3+3 survey, which asks stakeholders to list three positive aspects and three negative aspects of the project. This helps the review team efficiently identify the areas which stakeholders consider important.
The issues to be discussed with different stakeholders should be pitched at appropriate levels.
Below are some suggested areas for discussion:
• how well has the project met the business case objectives?
• were there any problems / difficulties in project implementation? How were they resolved?
• are the project assumptions made in the business case still sound and valid? If not,
how were they addressed?
• were there any unintended outcomes (both positive and negative) arising from the project?
• what are the reasons for the successes / failures? What should be done differently next time?
• are there any factors outside the control of the project team that have affected the
project outcomes?
• are the end users satisfied with the services provided? Why / why not?
On-site observation
On-site observation is a method by which the review team can gather first-hand data by observing the services provided at the point of delivery. It enables the review team to develop a more holistic perspective, i.e. an understanding of the context within which the services operate. It also allows the review team to learn about issues that some stakeholders may be unaware of, or that they are unwilling or unable to discuss openly. In some circumstances, on-site observation may be undertaken in the form of a “mystery customer” exercise to obtain first-hand information.
Benchmarking
Benchmarking compares the project’s performance and practices against those of comparable projects or organisations, locally or overseas, to identify gaps and potential improvements.
Having identified the underlying issues, the review team should:
• develop recommendations to rectify the problems identified, to realise benefits not fully met or to reap extra benefits. These may involve changes in project strategies or the project plan, expansion / reduction of the project scope or even termination of the project
• consult relevant stakeholders and potential users on the practicability of the recommendations
• generalise the lessons learnt for wider application to improve future projects. This may involve proposals to change policies and procedures.
The format and content of the PIR report need to be carefully considered to ensure that the report
shows a good range of useful information in a concise and meaningful way. All key elements,
costs and benefits (financial, economic and social) should be addressed in the report.
• background of the project – what were the key drivers for implementing the project? What are the project objectives?
• implications for the current project and future projects – what are the lessons learnt from this project and how should they be addressed in future projects?
Every effort should be made to ensure that the lessons identified are communicated and learnt so
that the department can ride on its success and avoid making the same mistakes.
Note: While some of the references quoted below focus on IT projects, the approaches used can
also be applied to non-IT projects.
New South Wales Treasury, Australia, Post Implementation Review Guideline 2004
http://www.treasury.nsw.gov.au/__data/assets/pdf_file/0008/5102/post_implementation_review.pdf
U.S. Department of Education, Federal Student Aid, Post Implementation Review Process
Description version 2.0
http://www.federalstudentaid.ed.gov/docs/ciolibrary/FSASA-2PIRProcess_v2.0.pdf
The National Science Foundation, U.S., User-Friendly Handbook for Project Evaluation (2002)
http://www.nsf.gov/pubs/2002/nsf02057/start.htm
APPENDIX 1
Impact Evaluation
Impact evaluation is defined by the World Bank as an assessment of the impact of a project on
final outcomes. It assesses the changes that can be attributed to a particular project.
A key element in impact evaluation is identifying the “counterfactual”: What would have
happened had the project not taken place? The most frequently used methods of identifying the
counterfactual are:

Randomised controlled trials (RCTs)
Participants are randomly assigned to a treatment group, which receives the project intervention, and a control group, which does not. Random assignment minimises systematic differences between the two groups, so that differences in outcomes can be credibly attributed to the project.

Quasi-experiments
Like RCTs, quasi-experiments compare the results achieved with the project against those that would have occurred without it. However, the control group is not randomly assigned. Instead, it is constructed so as to minimise the differences between the two groups (e.g. similar household incomes and social/educational backgrounds). This method carries a higher risk of misleading results because of the difficulty of eliminating bias in the selection of the control group.
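Where a comparison group is available, one simple way to estimate the counterfactual is a difference-in-differences calculation: the control group's change over time stands in for what would have happened without the project. The sketch below is illustrative only, and all figures are hypothetical.

```python
def difference_in_differences(treated_before: float, treated_after: float,
                              control_before: float, control_after: float) -> float:
    """Estimate project impact net of the background trend.

    The control group's change over time approximates the counterfactual
    (what would have happened without the project); the impact estimate is
    the treated group's change minus that background trend.
    """
    background_trend = control_after - control_before
    return (treated_after - treated_before) - background_trend

# Hypothetical smoking-rate figures (%) for a district covered by an
# anti-smoking campaign and a comparable district that was not:
impact = difference_in_differences(
    treated_before=22.0, treated_after=18.0,   # campaign district
    control_before=21.5, control_after=20.5,   # comparison district
)
print(impact)  # -3.0: the rate fell 3 points more than the background trend
```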
The benefits and challenges of impact evaluation are well described by the World Bank
Independent Evaluation Group (www.worldbank.org/ieg/ecd/conduct_qual_impact_eval.html).
APPENDIX 2
Typical data collection methods
[Table: typical data collection methods with their pros and cons]
APPENDIX 3
Techniques for identifying underlying issues
What is it?
A cause and effect diagram is a tool used to identify the real causes of a given effect (or outcome). A typical cause-and-effect diagram shows the effect on the right and its main causes (or factors) along a horizontal axis. These main causes are in turn effects that have their own subcauses, and so on, down many levels.
(1) Identify the effect being studied
The effect can be positive (objectives) or negative (problems). Place it in a box on the right side of the diagram.
[Diagram: a box labelled “the problem, objective, goal, etc.”]
(2) List the major categories of the causes that influence the effect being studied
The “4Ms” (methods, manpower, materials, machinery) or the “4Ps” (policies, procedures,
people, plant) are commonly used as a starting point.
(3) Identify the possible causes and subcauses within each major category
Example
A cause and effect diagram for the high turnover rate in a call centre
[Diagram: causes include “unattractive remuneration package” and “low job satisfaction”]
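Where a diagram records many levels of subcauses, the same structure can be kept as a nested mapping for later reference. A minimal sketch follows: the effect and the two main causes come from the example above, while the subcauses are assumptions added purely for illustration.

```python
# Nested mapping: each cause maps to its subcauses (an empty dict marks a leaf).
diagram = {
    "effect": "High turnover rate in a call centre",
    "causes": {
        "Unattractive remuneration package": {
            "Pay below market rate": {},      # assumed subcause
        },
        "Low job satisfaction": {
            "Repetitive work": {},            # assumed subcause
            "Limited career progression": {}, # assumed subcause
        },
    },
}

def print_causes(causes: dict, depth: int = 1) -> None:
    """Walk the cause tree, printing each level with increasing indentation."""
    for cause, subcauses in causes.items():
        print("  " * depth + "- " + cause)
        print_causes(subcauses, depth + 1)

print(diagram["effect"])
print_causes(diagram["causes"])
```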
Interrelationship diagram
What is it?
An interrelationship diagram is used to study the links and relationships between factors and
identify which factor is the major cause. It is useful when there are a number of factors and you
are unsure as to which factors have the most effect on the others.
As a process, an interrelationship diagram is often used after a number of factors have been
identified through other tools such as brainstorming and cause-effect diagram.
(1) List the factors relating to the problem
Example
An outsourcing project could not be completed on schedule. Possible factors were: unclear project scope, unrealistic project schedule, inadequate contract managers, ineffective contract management and poor contractor performance.
[Diagram: the factors laid out ready for linking]
(2) Identify cause-and-effect relationships and draw arrows to indicate the direction of influence
Using any factor as the starting point, systematically consider the relationship between each pair of factors by asking: is there a relationship? If yes, determine which one is the cause and which one is the effect, and draw an arrow from the cause to the effect. (Note: never draw two-headed arrows. If two factors influence each other, determine which has the stronger influence.)
Example
Using the factor “inadequate contract managers” as the starting point, determine its relationships with the other factors one by one and draw arrows to show the direction of influence.
[Diagram: arrows drawn between “inadequate contract managers” and the other factors]
(3) Repeat step (2) in a clockwise direction until all the factors have been considered and
arrows attached to show their relationships
Example
The interrelationships among all the factors are identified.
[Diagram: completed interrelationship diagram linking all the factors]
(4) For each factor, clearly record the number of arrows going in and going out
Example
In the above example, the main cause of the delay is unclear project scope, since it has the most outgoing arrows. Poor contractor performance is in fact the outcome, since it has the most incoming arrows. The causal chain is therefore:
[Diagram: unclear project scope → unrealistic project schedule / inadequate contract managers / ineffective contract management → poor contractor performance → delay in project schedule]
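The arrow-counting step is mechanical enough to automate when there are many factors. The sketch below tallies outgoing and incoming arrows from a list of cause-effect pairs; the specific arrow directions shown are assumptions for illustration, not taken from the original diagram.

```python
from collections import Counter

def classify_factors(edges):
    """Tally outgoing and incoming arrows for each factor.

    Given (cause, effect) arrow pairs from an interrelationship diagram,
    the factor with the most outgoing arrows is the likely root cause;
    the one with the most incoming arrows is the likely outcome.
    """
    out_deg, in_deg = Counter(), Counter()
    for cause, effect in edges:
        out_deg[cause] += 1
        in_deg[effect] += 1
    return out_deg, in_deg

# The outsourcing example, with arrow directions assumed for illustration:
edges = [
    ("unclear project scope", "unrealistic project schedule"),
    ("unclear project scope", "inadequate contract managers"),
    ("unclear project scope", "poor contractor performance"),
    ("inadequate contract managers", "ineffective contract management"),
    ("ineffective contract management", "poor contractor performance"),
    ("unrealistic project schedule", "poor contractor performance"),
    ("poor contractor performance", "delay in project schedule"),
]
out_deg, in_deg = classify_factors(edges)
print(out_deg.most_common(1))  # [('unclear project scope', 3)] -> root cause
print(in_deg.most_common(1))   # [('poor contractor performance', 3)] -> outcome
```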
5 Whys
What is it?
The 5 Whys is a simple problem-solving technique that helps identify the root cause of a problem. It involves repeatedly asking the question "why", peeling away the layers of symptoms to reach the root cause. Although the technique is called "5 Whys", the question may need to be asked fewer or more than five times before the root cause is identified.
• Ask why the problem happens and write down the answer
• If the answer provided does not point to the root cause of the problem, keep asking “why” and writing down the answers until the root cause is identified.
Example
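An illustrative chain (hypothetical, not drawn from an actual project):
Problem – applicants complain of long waiting times at a service counter.
1. Why? Each application takes a long time to process.
2. Why? Staff must enter the applicant’s details into two separate systems.
3. Why? The two systems were never integrated.
4. Why? Integration was dropped from the project scope to meet the launch date.
5. Why? The original schedule underestimated the integration effort.
Root cause: an unrealistic project schedule.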
APPENDIX 4
PIR report template
Contents
1. Executive Summary
1.1 Overall assessment
Provide an overview of whether the project is successful or not in terms of the extent of
project achievements and the performance of project management.
2. Background
State the key drivers for implementing the project under review, its objectives, expected
outcomes and deliverables, and the key stakeholders involved.
State what the PIR aimed to accomplish and the project areas examined in the review.
Name of the team member | State the role in the review team, e.g. review team leader, review team member, etc. | State the relationship with the project under review, e.g. original project team member, independent third party, etc.
Describe the review methodology, including the performance measures used to evaluate
project performance. Describe the data collection approach.
The review team can use the following table to summarise the review methodology.
State the project outcomes to be measured | Short description of the performance measures used | State the data collection method
3.1 Project outcomes
Describe whether, and to what extent, the expected project outcomes have been achieved. State whether there are any unintended outcomes (positive and negative).
Project objectives | State the level of achievement | Explain the reasons for deviations

Scope | State the deliverables expected | State the actual outputs delivered | Explain the reasons for deviations
4.1 Lessons learnt
State the lessons learnt which may be used to improve future project delivery, distinguishing between project-specific lessons and general lessons.
4.2 Recommendations
Describe the measures to rectify the problems identified, to realise benefits not fully
delivered, to reap extra benefits, etc. Explain whether the existing policies and practices
should be changed.
Describe the action plan to implement the recommendations and disseminate the review
findings.
Efficiency Unit
13/F., West Wing, Central Government Offices
11 Ice House Street
Central
Hong Kong
Email: euwm@eu.gov.hk
Website: www.eu.gov.hk