
Serving the Community

Through Successful Project Delivery

A User Guide to
Post Implementation Reviews
February 2009
EFFICIENCY UNIT

VISION & MISSION


Vision Statement
To be the preferred consulting partner for all Government
bureaux and departments and to advance the delivery of
world-class public services to the people of Hong Kong.

Mission Statement
To provide strategic and implementable solutions to
all our clients as they seek to deliver people-based
Government services. We do this by combining our
extensive understanding of policies, our specialised
knowledge and our broad contacts and linkages
throughout the Government and the private sector. In
doing this, we join our clients in contributing to the
advancement of the community while also providing a
fulfilling career for all members of our team.



FOREWORD

We published the Government Business Case Guide in May 2008 to assist departments in
determining whether and how a project should be undertaken. This Guide closes the loop by
advocating the need for Post Implementation Reviews (PIRs) and setting out a framework for
doing so.

In managing our programmes and projects, it is essential that we assess whether the intended
results have been achieved, establish the reasons if they have not, and identify opportunities for further improvement.
This can help improve our service delivery, ensure that public money is well spent and
demonstrate accountability. It will also be useful in responding to queries from oversight
authorities.

The PIR is a tool which aims to help us achieve the above objectives. Much of the content
of this Guide will appear to be no more than common sense to experienced officers. Indeed,
we believe all Bureaux/Departments constantly review the outcomes of their programmes, but
making PIRs a conscious part of the project cycle is good practice in modern-day public sector
management.

This Guide is a first attempt to pull together information on how we might conduct PIRs for
non-works and non-information and communications technology projects. It provides a variety
of guidelines, tools and techniques to assist colleagues, but should not be followed slavishly.
Departments should always exercise common sense in deciding how much time and resources
should be devoted to a review. There will be occasions when the review will be no more than
a short report on file. On other occasions it will be a minor project in itself. The most important
thing is that a review is conducted.

The Efficiency Unit would warmly welcome any feedback on this Guide.

Head, Efficiency Unit



EXECUTIVE SUMMARY
Purpose of the Guide
The purpose of this Post Implementation Review Guide (the Guide) is to inform civil service
colleagues what a Post Implementation Review (PIR) is, and when and how to conduct one. It
aims to provide a general framework and guidelines to assist departments in conducting PIRs on
non-works and non-information and communications technology projects.

What is a PIR?
A PIR evaluates whether the project has achieved its intended objectives, reviews the performance
of project management activities and captures learning points for future improvements.

It is a learning process and should not be used for blame allocation.

Why conduct a PIR?


The Government has a responsibility to make the best use of public resources to deliver services
to the community, and to demonstrate accountability. A PIR helps departments to achieve these
purposes.

What projects should be reviewed?


In general, the cost of conducting a PIR should not outweigh its benefits. Importance, nature,
purpose and outcome are the aspects commonly considered when selecting projects for review.

It is not necessary to conduct a full-scale PIR for every project under review. For example, a
simple minute may suffice for a small-scale project.

When should a PIR be conducted?


Depending on the nature, complexity and duration of the project, PIRs may be conducted
periodically during the implementation stage to ascertain whether the project is proceeding
on the right track, and after project closure to assess the short-term and long-term outcomes.
Nevertheless, a PIR should be planned in advance. In particular, the mechanism to collect baseline
information, e.g. the service levels before project implementation, should be established at the
project planning stage.


How to conduct a PIR?


The Guide presents a four-stage model to conduct a PIR:

Stage 1: Define review objectives and scope of assessment
Tasks:
• Conduct preliminary research on the project under review
• Identify special areas that need to be addressed
• Finalise review objectives and scope of assessment
Key issues:
• Are the pre-defined review objectives and assessment scope (if any) still valid?
• Are there any issues that need to be addressed?
• Can the review be completed within the time and resources constraints?

Stage 2: Determine review methodology
Tasks:
• Identify project outcomes to be assessed
• Develop assessment method
• Design data collection approach
Key issues:
• What project outcomes should be assessed?
• Is the pre-project data available?
• Are there any established performance measures? Are they relevant and adequate?
• What are the most effective ways to collect the information required?

Stage 3: Collect and analyse data
Tasks:
• Collect data
• Compare the actual performance against the expected performance
Key issues:
• Can we directly compare the expected and actual data on a "like with like" basis?
• Are there any changes in project parameters and assumptions?
• Are there any external factors that have affected the project outcomes?

Stage 4: Identify issues and lessons learnt and reporting
Tasks:
• Identify lessons learnt
• Identify the root causes for under-performance, if applicable
• Develop recommendations to improve the current project and future ones
• Document and disseminate the review findings
Key issues:
• What went well and what went wrong?
• Do the project outputs meet the actual needs?
• What should be done differently to improve the delivery of the current and future projects?



CONTENTS
1 Introduction
    1.1 Purpose of this Guide
    1.2 What is a PIR?
    1.3 Why conduct a PIR?
    1.4 What projects should be reviewed?
    1.5 When should a PIR be conducted?
    1.6 How long does a PIR normally take?
    1.7 Who should conduct the review?
    1.8 Pre-requisites for a successful PIR

2 Define review objectives and scope of assessment
    2.1 Set review objectives
    2.2 Define scope of assessment

3 Develop review methodology
    3.1 Review project outcomes
    3.2 Assess project management performance

4 Collect and analyse data
    4.1 Collect data
    4.2 Analyse data collected

5 Identify issues and lessons learnt and reporting
    5.1 Identify issues and lessons learnt
    5.2 Develop recommendations
    5.3 Report findings
    5.4 Implement changes and share findings

References

Appendices
    1 Impact evaluation
    2 Typical data collection methods
    3 Techniques for identifying underlying issues
    4 PIR report template


1 INTRODUCTION

1.1 Purpose of this Guide


The purpose of this Post Implementation Review Guide (the Guide) is to inform civil service
colleagues what a Post Implementation Review (PIR) is, and when and how to conduct one. It
aims to provide a general framework and guidelines to assist departments in conducting PIRs.

It mainly focuses on non-works and non-information and communications technology projects. It
is relevant to projects delivered both by in-house staff and by the private sector.

This Guide should not be treated as a strait-jacket. It does not attempt to provide a one-size-fits-all
approach for reviewing all types of projects. Readers must judge which sections and actions
suggested in the Guide are applicable to their individual situations.

The target audience of the Guide includes:

• Project directors / project sponsors – the Guide serves to raise awareness of the need to
conduct a PIR, and of the requirement to prepare for a PIR during the project planning and
implementation stages. This will help ensure that basic information such as project objectives
and expected costs and benefits is clearly set out and well-documented

• PIR team members – the Guide sets out the various steps, approaches and techniques that
may be used to conduct a PIR.

The generic term “department” is used throughout the Guide to describe the various levels of
Government organisational structures such as bureaux, departments and agencies.

The term "project" refers to both projects and programmes. A project focuses on delivering
specific outputs or products and has a definite lifetime. A programme, by contrast, is concerned
more with delivering outcomes and has a broader focus; it may not have a clear end date.


1.2 What is a PIR?


Various terms are used to describe learning from project experience and reviewing project
performance: post mortem review, project closure review, post project review, etc. In this Guide,
we use the term PIR for the review process that evaluates project achievements in both
qualitative and quantitative terms. The main purposes of a PIR are:
• to ascertain whether the project has achieved its intended objectives
• to review the performance of project management activities
• to capture learning points for future improvements.

It should be emphasised that a PIR is not merely for measuring whether the project has delivered
its agreed outputs; it also examines how well the delivered outputs matched the actual needs
that the project aimed to fulfil.

Example

A department introduced a telephone booking service to improve its appointment
booking process. A PIR was conducted to evaluate the project achievements. The
review team found that the telephone booking service was delivered on schedule and
within budget. Instead of making an appointment in person, customers can use
an Interactive Voice Response System (IVRS) and complete a booking within two
minutes. On its own, the project was considered a success.

However, most of the department’s customers are elderly persons and they are not
used to making appointments through IVRS. As a result, the project has made little
impact on improving the department’s overall appointment booking service.

A PIR helps answer questions such as:

• was the project successful, and for what reasons?
• to what extent has the project achieved its intended outcomes?
• to what extent has the project delivered its agreed outputs?
• what may be done to improve the current or future projects?

A PIR is essentially an internal learning process rather than a process of finger-pointing, blame
allocation, etc.


1.3 Why conduct a PIR?


The Government has a responsibility to make the best use of public resources to deliver services
to the community, and to demonstrate accountability. Specifically, a PIR can help:
• identify measures to improve the project being reviewed
• assess the contribution of the project to the department’s business objectives
• provide an effective means to demonstrate accountability
• evaluate whether the intended project outcomes have been achieved
• improve benefits realisation and project implementation
• improve the delivery and outputs of future projects by learning from the past.

1.4 What projects should be reviewed?


The PIR process requires time and effort, especially for a full-scale review. Careful consideration
should be given to selection of projects for review to ensure that the costs of conducting a
PIR would not outweigh its benefits. The selection criteria are specific to the projects being
considered and are different depending on the purposes of the review. Below are some suggested
criteria:

• Importance: in terms of costs, resources and impact


It is worthwhile to review projects which involve high costs and resources and/or have
high impact.

• Purpose: pilot/exemplary projects and joined-up projects


A PIR can be conducted to determine whether new approaches or service models should
be continued, modified or adopted for wider application.

• Nature: on-going versus one-off projects


A one-off project is less likely to be replicated in future. PIRs for this kind of project may
have a lower reference value.

• Outcome: successful and unsuccessful projects


It is equally important to identify best practices/lessons from successful and unsuccessful
projects.


It is perfectly acceptable for a PIR to conclude that a project failed. The important point is that
the department learns from the experience. It is better to identify a failure early and terminate the
project than to continue to repeat the mistake.

It is not necessary to conduct a full-scale PIR for all the projects selected. For instance, a simple
minute or paper outlining the assessment results and recommendations may suffice for small-scale
projects.

1.5 When should a PIR be conducted?


A PIR can be conducted after project closure to assess the full impact of the project and identify
improvement opportunities for future projects. For long duration projects, it can be conducted
at an appropriate time after completion of critical milestones or after the outputs or benefits are
expected to materialise. It can also be conducted when major issues are encountered to see if
there is a need to modify the original project plan. A PIR conducted too soon may not be able to
assess the full impact, while a review conducted too late may fail to influence the delivery and
outcomes of current project and/or future projects.

A PIR need not be a one-off exercise. A project may last for a long time and different outcomes may
be achieved as the project proceeds; some may take years to realise. For these projects, it may
be useful to conduct PIRs periodically.

Example

For an initiative on prevention of domestic violence, the short-term outcome may be
increased awareness of domestic violence while the long-term one may be reduced
domestic violence. A PIR can be conducted after the launch of a publicity campaign to
assess public awareness. When the reduction in domestic violence is expected to
be more apparent, another PIR can be conducted to examine the extent of the change.


(Diagram: the possible timing for conducting PIRs.)

1.6 How long does a PIR normally take?


The time required for a PIR depends very much on the complexity of the project under review,
the scope of the PIR and the availability of data. It may range from weeks to months. In an ideal
case, if the project and the PIR scope are not complex and all the required information is readily
available, a PIR may be completed within two to four weeks.

1.7 Who should conduct the review?


The main consideration here is whether the review should be conducted by the original project
team or by an independent third party (i.e. one that has not been involved in the project). Using
the original project team to conduct the PIR has the advantage that the review can be conducted more
efficiently, as the team is already familiar with the project. However, there are disadvantages:
issues overlooked by the project team during the project may well be overlooked again in the
PIR, and the objectivity of the PIR may be challenged, as the original project team may be
defensive and biased.


An independent team can evaluate the project with a fresh pair of eyes, but it may be more
costly and would likely take longer, as it needs to understand the project from scratch.

Given that both the original project team and the independent team options have their own
merits and demerits, it may be more appropriate to establish a review team comprising a mix of
independent parties and original project team members with the independent parties taking a
leading role in the review.

1.8 Pre-requisites for a successful PIR


When planning and conducting a PIR, departments should:

• focus on identifying learning points for future improvements instead of fault finding.
Evaluating the processes rather than the performance of individual stakeholders is more
likely to gain the cooperation of those involved

• build in the PIR process at the project planning stage to ensure that baseline information
such as project objectives, estimated costs, performance measures, deliverables, milestones,
time frame and expected benefits are clearly set out. This would avoid the common pitfall
of having insufficient baseline information, especially the service levels before project
implementation, for comparison

• clearly document the approved changes to project assumptions and parameters

• express the project outputs and outcomes in measurable terms as far as possible

• ideally, establish the cause-effect relationship between the project and its outputs/
outcomes. However, this will rarely be possible for outcomes which may be subject to the
influence of external factors. A possible way to overcome this issue is to conduct impact
evaluation. More information is at Appendix 1

• get sponsorship from the management to (i) acquire the necessary resources to conduct
the PIR, (ii) secure access to information, and (iii) get its commitment to consider / follow
up the recommendations.



2 DEFINE REVIEW OBJECTIVES AND
SCOPE OF ASSESSMENT
Upon deciding that a PIR should be conducted, senior management should set out the review
objectives and scope of assessment. A review team should then be appointed to carry out the
review.

Departments may have already committed themselves to conducting a PIR in the business case
report or funding paper, or there may be an established mechanism for doing so. In such cases, the review
objectives and scope of assessment may already have been defined. Nevertheless, it is advisable to
re-examine the objectives and scope of the PIR and, if necessary, refine them to take into account
the latest developments in the project and the community.

Preliminary research should be conducted to understand the project background, its objectives
and the key issues. Moreover, it is advisable to consult key stakeholders such as project sponsors,
project team and end users on whether there are any specific areas that need to be addressed in
the PIR.

2.1 Set review objectives


In general, a PIR aims to:

• review the achievement of project outcomes

• review the performance of project management

• draw lessons for future improvement

The focus of a PIR may vary depending on when it is conducted.

During project implementation

A PIR may be conducted at critical milestones during project planning and implementation,
particularly for long-term and on-going projects. Given that the project outputs and outcomes may
not be fully delivered at the time of review, PIRs conducted at this stage usually focus on:

• ascertaining whether the project is proceeding as planned


• assessing whether the project’s goals, objectives and assumptions are still relevant and
valid

• determining whether there is a need to modify the project plan or strategies.

After project closure

A PIR may be conducted some time after the project closure. At this stage, all the outputs should
have been delivered and short-term outcomes realised. Such PIRs would generally focus on:

• evaluating the project achievements and their contribution to the business objectives of the
sponsoring department(s) or the government

• identifying improvement areas and drawing lessons for delivery of future projects

• ascertaining value for money, if applicable.

For long-term projects it may be worthwhile to continue to conduct PIRs periodically, say every
five years, to ascertain whether the project is still relevant and delivering services in the most
cost-effective manner.

2.2 Define scope of assessment


Depending on the complexity of the project, a PIR can cover the whole project or specific
parts of it. For example, a cross-departmental call centre project may involve setting
up the call centre, recruiting and training call centre agents, procuring equipment,
establishing service level agreements with participating departments and developing the
knowledge base.

While there may be specific areas that warrant review, there are common areas that a typical PIR
should cover. These areas include:

Project outcomes

• achievement of project objectives

• realisation of project benefits (financial and non-financial)

• unintended outcomes (positive and negative)

• user feedback.


Project management

• scope – whether the project has produced the agreed deliverables

• time – whether the project was delivered on schedule, covering both the overall project
schedule and the schedule for individual milestones

• cost – whether the project expenditure was within budget; both non-recurrent and recurrent
costs should be examined

• quality – whether the project deliverables meet the required quality standards.

Lessons learnt

A PIR should identify the areas for improvement and draw lessons for future improvement.
Lessons can be broadly grouped into:

• Project-specific lessons for improving the subsequent phase(s) of the project being
reviewed

• Lessons that can be generalised for improving the delivery of future projects.

Example

A department has outsourced its cleansing services for the first time. A PIR was
conducted three months after contract commencement. The review team found that
the contractor frequently failed to respond promptly to workload fluctuations because
it could not deploy its staff flexibly under the input-based contract specifications. The
review team recommended the department use output-based specifications in its future
cleansing contracts and consider extending the approach to other contracts.

For projects which have not yet been completed, it may not be possible to evaluate all of the
above areas. The project plan should be examined to identify what is expected to be achieved or
delivered by the time of the evaluation.

Given the wide range of review areas, the challenge at this stage is to prioritise and focus on key
issues to avoid information overload and to ensure that the PIR can be completed within the time
and resources constraints.



3 DEVELOP REVIEW METHODOLOGY

A key task of a PIR is to compare the actual project performance against the approved plan. For
some aspects of project performance, such as those relating to project management issues like cost
and time, this is straightforward. For others, in particular those relating to project outcomes,
measuring the actual performance can be a challenge. For
example, to evaluate the outcome of an education project aiming to help disabled students
achieve independence in daily life activities, one would need to determine how to measure
"independence in daily activities".

3.1 Review project outcomes

3.1.1 Identify project outcomes

Before developing the method for assessing the project outcomes, the review team should
determine the outcomes to be assessed. The review team should note that for some projects, it
may take time for the outcomes to develop or materialise.

Below is an example of the possible outcomes at different stages of an anti-smoking campaign
which uses television advertising to promote the anti-smoking message.

Long-term outcome: less smoking
Short-term outcome: increased awareness of smoking hazards
Outputs: television advertisement

Since a project may produce different outcomes as it proceeds, the review team needs to
determine which project outcomes are expected to have been realised by the time of the review.


3.1.2 Evaluate project outcomes

The key steps in evaluating project outcomes include:

Review existing documents

Project outcomes and their performance measuresNote may have already been defined in project
documents such as business case report, project brief and funding paper. Therefore, the first step
is to examine various project documents to see if suitable and adequate performance measures
are already in place.

Example

In a project to outsource property management services, the expected outcomes
were an improvement in the provision of security and cleansing services and in the
maintenance of the facilities. The review team examined the contract and found that
performance measures for these outcomes were already defined in it. They
included end user satisfaction, the inspection results of the procuring department, the
fault incidence rate and the complaint statistics.

Develop performance measures

If the review team needs to develop new performance measures, several possible ways can be
considered:

• consult end users on what should be measured, i.e. the service attributes that are important
to an end user

• consider successful cases of similar projects to learn from their experience

• benchmark with other relevant local and overseas bodies to use standard values set by
specialised agencies, for example, World Health Organisation, World Bank, Organisation
for Economic Co-operation and Development, or use measures adopted by other
government agencies

• discuss with stakeholders the basis for evaluation.

Note: A performance measure is a quantifiable metric chosen to assess performance, e.g. customer satisfaction rate.


A performance measure should consist of three components: area of assessment, indicator(s) and
target value(s) as illustrated below:

Area of assessment: e.g. effectiveness of an environmental protection project to reduce paper consumption

Indicator(s): e.g. annual paper consumption

Target value(s): e.g. reduction of the annual paper consumption by 10%

Below is an illustrative example of using multiple performance measures to assess the quality of a
vocational training programme.

Quality of vocational training

• Employability
  - Employment rate: target > 93%
  - Job nature: target > 75% are full-time jobs
• Satisfaction
  - Satisfaction of graduates: target > 90% are satisfied
  - Satisfaction of employers: target > 85% are satisfied
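The three-part structure above lends itself to simple tabulation. Below is a minimal sketch, not part of the original Guide, of how measures like those in the vocational training example could be recorded and checked against review findings; all class names and the "actual" figures are illustrative assumptions.

```python
# Minimal sketch: recording performance measures and checking targets.
# All names and figures are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    area: str        # area of assessment
    indicator: str   # what is measured
    target: float    # target value, expressed as a proportion

    def met(self, actual: float) -> bool:
        """True if the actual value reaches the target."""
        return actual >= self.target

measures = [
    PerformanceMeasure("Employability", "Employment rate", 0.93),
    PerformanceMeasure("Employability", "Share of full-time jobs", 0.75),
    PerformanceMeasure("Satisfaction", "Graduates satisfied", 0.90),
    PerformanceMeasure("Satisfaction", "Employers satisfied", 0.85),
]

# Hypothetical actual results collected during the review
actuals = {"Employment rate": 0.95, "Share of full-time jobs": 0.71,
           "Graduates satisfied": 0.92, "Employers satisfied": 0.88}

for m in measures:
    status = "met" if m.met(actuals[m.indicator]) else "NOT met"
    print(f"{m.area} / {m.indicator}: target {m.target:.0%}, "
          f"actual {actuals[m.indicator]:.0%} -> {status}")
```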

A good performance measure should have the following characteristics:

• direct – the measure should directly measure the outcome. For example, a programme to
enhance children's academic performance might use examination or assessment results
rather than the attendance rate of students as a performance measure

• measurable – measurable indicators should be used whenever possible in order to provide
an objective assessment. For example, website performance can be translated into the
number of hits, number of documents downloaded or time spent on the site


• specific – a target should be set in either quantitative or qualitative terms in order to
provide a clear indication of the level of performance expected. For example, a programme
to improve health conditions can be linked with specific targets such as a reduction in the
annual number of infectious disease cases by a certain percentage. Many targets will change
over time, becoming more demanding as the project progresses

• adequate – the measure should be sufficient to assess the outcome. Project performance
can be reflected by a number of measures. Take tourism promotion as an example:
measures can be the number of visitors and/or per capita spending. The review team
should consider whether the defined measure(s) are sufficient to assess the achievement.

Design data collection approach

Data on performance measures may be located in readily available sources, e.g. performance
reports, or need to be collected from other sources. The table below shows the possible sources
of data for evaluation of project outcomes and their common data collection methods:

Data required: project objectives; financial and non-financial benefits; short-term and long-term outcomes
Sources of data: business case report; project brief / funding paper; inception report; project initiation document; computer records from Departmental Management Information System, Human Resources Management Information System, etc; case files; performance reports; audit reports; Ombudsman reports
Data collection methods: document review; focus group / interview; surveys

Data required: user feedback; complaints / suggestions
Source of data: end users
Data collection methods: questionnaire survey (face-to-face, telephone, on-line, paper); focus group / interview; feedback forms; "mystery customers"

Each data collection method described above has its pros and cons. The typical ones are listed in
Appendix 2.


The review team can consider combining several different methods to maximise the merits and
minimise the demerits of each data collection method. For example, to measure user satisfaction,
a questionnaire survey may be conducted after the attributes which the users consider crucial are
identified through focus groups.

3.2 Assess project management performance

Normally, the performance measures on “scope”, “time”, “cost” and “quality” are rather
straightforward as they are usually well-defined in various project documents and document
review is a common method used to collect these data. By comparing the planned data outlined
in the business case report / funding paper against the actual data captured, the performance of
project management can be objectively assessed.



4 COLLECT AND ANALYSE DATA

The main task in this stage is to collect data on both the expected and the actual performance
and compare them.

4.1 Collect data


The review team may be tempted to collect more data than actually required. This temptation
should be avoided, as it may lead to:

• a significant increase in cost, time and complexity with no real gain in achieving the
original objectives of the PIR

• the data collection process becoming so tedious that the data owners may not wish to take
part in it.

Before collecting data, the review team should:

• seek clearance and permission from the appropriate authorities, in particular when dealing
with sensitive information such as personal data

• explain clearly to the data owners the objectives of the PIR as well as the purposes of
collecting the data, in order to avoid any misunderstandings

• choose an appropriate time for data collection to avoid the possibility that the data
are influenced by unrelated factors. For example, measuring public perception of a
particular government service during sensitive periods, such as right after the budget
speech or policy address, may unduly influence the results.

4.2 Analyse data collected


The review team should analyse the data collected based on the established performance
measures to assess the achievement of project outcomes and performance on project management.
Typical areas to probe in the analysis are:


Project outcomes

• Project objectives: Are there any changes / improvements brought by the project? Did the degree of changes / improvements meet the stated targets?
• Tangible and intangible benefits: Have the expected benefits (e.g. savings, improvement in service quality) been realised, and to what extent? Are there any unexpected benefits?
• User satisfaction: Are the end users satisfied with the services provided or the changes brought about by the project?

Project management

Scope
• Range of services provided: Do they match those specified in the business case report?
• Service capacity: Does the service capacity meet the stated targets?
• Outputs / deliverables: Have all the outputs / deliverables been produced?

Time
• Completion of key milestones: Were the key milestones completed on schedule? Were the inputs / resources of the project spent on time? Were the outputs of the project produced on time?
• Full-live run / implementation date: How does the actual project schedule compare with the approved schedule?

Cost
• Project expenditure (non-recurrent and recurrent): Any deviation from the approved budget?
• Staff resources (non-recurrent and recurrent): Any deviation from the approved manpower plan?

Quality
• Quality standards / service levels: Do the outputs / deliverables meet the required quality standards / service levels?

(Note: For projects which have not yet been completed, not all the areas described above need to
be evaluated.)
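The planned-versus-actual comparison that runs through these tables is mechanical once the figures are assembled. Below is a minimal sketch, not part of the Guide; the cost and schedule figures are hypothetical.

```python
# Minimal sketch: comparing planned against actual project figures and
# flagging the percentage deviation. All figures are hypothetical.

planned = {"non-recurrent cost ($m)": 12.0,
           "recurrent cost ($m/yr)": 3.0,
           "duration (months)": 18}
actual  = {"non-recurrent cost ($m)": 13.1,
           "recurrent cost ($m/yr)": 2.8,
           "duration (months)": 21}

for item, plan in planned.items():
    act = actual[item]
    deviation = (act - plan) / plan  # positive = over plan
    print(f"{item}: planned {plan}, actual {act} ({deviation:+.0%})")
```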

4.2.1 Issues in data analysis

In conducting data analysis, the review team should pay special attention to the following issues:

Time value of money

The value of money changes as time goes by. A dollar today is not worth the same as five years
before due to factors such as inflation. It may not be appropriate to directly compare the project
costs and the financial benefits estimated in the business case with the actual figures as they were
calculated at different time periods. It is necessary to remove the effect of the time value of money
so that all values can be compared on an equal basis.

Discounting is a method used to compare costs incurred and financial benefits realised at
different time periods, in which a discount rate is applied to convert future costs or benefits to
the equivalent costs or benefits in today's values (or present values). Readers can refer to the
Efficiency Unit's publication, A Government Business Case Guide (see Note below), for further information on
the discounting method.

Note: The guide is available at http://www.eu.gov.hk/english/publication/pub_bp/files/Business_Case_Guide.pdf
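As a minimal sketch of the discounting idea, the snippet below converts a stream of future benefits to present values using PV = amount / (1 + rate) ** years. The 4% discount rate and the benefit stream are illustrative assumptions, not figures from the Guide, which defers to the Business Case Guide for the method itself.

```python
# Minimal sketch: discounting future benefits to present values so that
# planned and actual figures can be compared on a like-with-like basis.

def present_value(amount: float, years: int, rate: float) -> float:
    """PV = amount / (1 + rate) ** years."""
    return amount / (1.0 + rate) ** years

rate = 0.04  # assumed annual discount rate (illustrative)

# Hypothetical benefit stream: $2m realised in each of years 1 to 3
benefits = [(1, 2_000_000), (2, 2_000_000), (3, 2_000_000)]

pv_benefits = sum(present_value(b, t, rate) for t, b in benefits)
print(f"Present value of benefits: ${pv_benefits:,.0f}")
```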


Changes in project parameters

The business case and project plan were developed based on past assumptions and predictions
about the future. As the project proceeds, the project parameters such as project schedule
or even the project scope may have changed to meet the actual requirements at the time of
implementation. The review team must be fully aware of these changes (approved or not)
and understand the rationale behind. The approved changes should be used as the basis for
comparison instead of the original parameters stated in the business case report / project plan.

Attribution issues

The review team should examine whether the actual outcome is attributable to the project itself or
to other factors. A project outcome may be influenced by factors other than the project, and simply
looking at the changes before and after the project may not be sufficient to make a credible
evaluation.

Example

A PIR was conducted to evaluate the success of a mosquito prevention project in a
particular district, which aimed to reduce the ovitrap index by 5%. The review found
that the ovitrap index decreased by 7% after the project was implemented, and it
seemed that the project had achieved its expected outcome. However, the decrease
may be attributable to other factors, such as a decrease in the number of construction
sites in the vicinity and/or the amount of rainfall during the assessment period.

The review team should take into account any changes in the external environment during project
implementation and assess whether they have had any significant impact on the project.
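One simple way to adjust for such external factors, where a comparable district without the project exists, is a difference-in-differences comparison. The sketch below is offered as an illustration, not a method prescribed by the Guide, and all ovitrap figures in it are hypothetical.

```python
# Minimal difference-in-differences sketch for the attribution issue above:
# compare the before/after change in the project district with the change
# in a comparable district that had no project. All figures hypothetical.

project_before, project_after = 0.20, 0.13        # ovitrap index, project district
comparison_before, comparison_after = 0.19, 0.15  # comparable district, no project

project_change = project_after - project_before            # -0.07
comparison_change = comparison_after - comparison_before   # -0.04

# The comparison district's change approximates what would have happened
# anyway (e.g. less rainfall); the remainder estimates the project effect.
estimated_effect = project_change - comparison_change      # -0.03
print(f"Estimated project effect on ovitrap index: {estimated_effect:+.2f}")
```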



5 IDENTIFY ISSUES AND LESSONS
LEARNT AND REPORTING

5.1 Identify issues and lessons learnt


This stage identifies what went well and what went wrong so that the department can do better
in the future. Identifying the reasons for project successes enables the department to identify
best practices and apply them in future projects, while lessons learnt from project failures
enable it to avoid making the same mistakes next time.

The following are some common methods for identifying issues and lessons learnt:

Comparison between planned and actual data

The review team has compared what actually happened against what was planned in the previous
stage. The comparison would help identify what was done well and what was done badly. This
forms the basis for further investigation to identify the underlying issues (or critical success factors)
and to develop recommendations.

Common techniques for analysing the underlying issues are:

• cause and effect diagram

• interrelationship diagram

• 5 whys.

More information on these techniques is at Appendix 3.

Document review

During project implementation, the project team may have used the following logs to facilitate
their monitoring and control of the project:


• incident logs – to record the key incidents reported by the staff

• daily logs – to record the daily activities and work done by individual staff

• issue logs – to record the status of the resolution of all the issues raised by the stakeholders

• lesson logs – to summarise the lessons learnt by the project team.

The review team is advised to review these project logs to identify whether there are any issues
that are worth further examination.

Focus group and interview

Focus groups and interviews allow the review team to have a more in-depth discussion with
stakeholders to identify issues and solutions. The discussion would normally focus on what
went right, what went wrong, and what can be improved. The hallmark of focus groups is the
explicit use of group interaction to generate ideas and insights. Interviews, on the other
hand, are particularly appropriate in situations involving complex subject matter, high-status
interviewees, and sensitive topics.

To begin, the review team can consider using the 3+3 survey, which asks the stakeholders to
list three positive aspects and three negative aspects of the project. This helps the review team
identify efficiently the areas which the stakeholders consider important.

The issues to be discussed with different stakeholders should be pitched at appropriate levels.
Below are some suggested areas for discussion:

With project sponsors

• how well has the project met the business case objectives?

• how effective was the project delivery?

• are there any outstanding issues that need to be addressed?

• what are the learning points / improvement opportunities?


With project team

• were there any problems / difficulties in project implementation? How were they resolved?

• are the project assumptions made in the business case still sound and valid? If not,
how were they addressed?

• were there any unintended outcomes (both positive and negative) arising from the project?

• what are the reasons for the successes / failures? What should be done differently next time?

• are there any outstanding issues that need to be followed up?

• are there any factors outside the control of the project team that have affected the
project outcomes?

• are there any further improvement areas?

With end users

• does the project meet their needs?

• are the end users satisfied with the services provided? Why / why not?

On-site observation

On-site observation is a method by which the review team can gather first-hand data by observing
the services provided at the point of delivery. It enables the review team to develop a more
holistic perspective, i.e. an understanding of the context within which the services operate. It also
allows the review team to learn about issues that some stakeholders may be unaware of or
that they are unwilling or unable to discuss openly. In some circumstances, on-site observation
may be undertaken in the form of a "mystery customer" exercise to obtain first-hand information.

Benchmarking

Benchmarking is another way to identify problems and develop improvement measures. By
comparing performance data and practices against those of similar projects or against industry /
business standards, the areas which were "under" and/or "over" performing and the "best"
practices can be identified.


5.2 Develop recommendations


Based on the issues identified, the review team can develop actionable recommendations with
a view to bringing about future improvement. Recommendations should be specific as well as
practical, and backed up by evidence. In general, the review team should:

• develop recommendations to rectify the problems identified, to realise benefits not fully
met or to reap extra benefits. These may involve changes in project strategies, project plan,
expansion / reduction of the project scope or even termination of the project

• consult relevant stakeholders and the potential users on the practicability of the
recommendations

• develop an implementation plan

• generalise the lessons learnt for wider application to improve future projects. This may
involve proposals to change policies and procedures.

5.3 Report findings


The review findings and recommendations should be reported to the senior management for
consideration. A PIR report should be prepared to facilitate decision making and future reference.
The report documents the effectiveness and efficiency of the project, the effectiveness of project
management, lessons learnt, and best practices to be used in future projects.

The format and content of the PIR report need to be carefully considered to ensure that the report
shows a good range of useful information in a concise and meaningful way. All key elements,
costs and benefits (financial, economic and social) should be addressed in the report.

The content of a full PIR report could include the following:

• background of the project – what were the key drivers for implementing the project? What
are the project objectives?

• review objectives and scope


• formation of review team and the review methodology

• any limitations / obstacles in conducting the PIR

• achievement of project outcomes and reasons for variations

• performance of project management – did the project proceed as planned in terms of
scope, cost, time and quality? If not, what were the reasons for the deviations?

• implications for the current project and future projects – what are the lessons learnt from
this project and how should they be applied in future projects?

A template of a PIR report is at Appendix 4.

5.4 Implement changes and share findings


The results of a PIR are only meaningful when they are put into practice. Upon endorsement
of the PIR report, the department should implement the recommendations and disseminate the
lessons learnt.

Every effort should be made to ensure that the lessons identified are communicated and learnt so
that the department can ride on its success and avoid making the same mistakes.

Example

A department has established the following mechanisms to disseminate the lessons
learnt from its PIRs:

• organise de-briefing workshops to share PIR findings

• set up a knowledge base on its intranet to disseminate PIR findings



REFERENCES

Note: While some of the references quoted below focus on IT projects, the approaches used can
also be applied to non-IT projects.

Department of Treasury and Finance, Government of Western Australia, Project Evaluation Guidelines (2002)
http://www.dtf.wa.gov.au/cms/uploadedFiles/project_evaluation_guidelines_2002.pdf

New South Wales Treasury, Australia, Post Implementation Review Guideline (2004)
http://www.treasury.nsw.gov.au/__data/assets/pdf_file/0008/5102/post_implementation_review.pdf

Queensland Transport and Main Roads, Australia, Post Implementation Review
http://www.transport.qld.gov.au/Home/Projects_and_initiatives/Onq_project_management_methodology/Methodology/Generic_methodology/fp_post_implementation_review

Office of the Government Chief Information Officer, the HKSAR Government, Post Implementation Departmental Return (PIDR) on Administrative Computer System Funded under CWRF Head 710 – Computerisation
(Government intranet: http://gipms.ogcio.ccgo.hksarg/GIPMS/faf/BDInformationMaintanceAction?method=viewInf&page=FormsandGuides)

Office of Evaluation, Planning and Coordination Department, Japan International Cooperation Agency (JICA), JICA Guideline for Project Evaluation (2004)
http://www.jica.go.jp/english/operations/evaluation/jica_archive/guides/guideline.html

UK Office of Government Commerce, Post Implementation Review
http://www.ogc.gov.uk/delivery_lifecycle_post_implementation_review_pir.asp

U.S. Department of Education, Federal Student Aid, Post Implementation Review Process Description, version 2.0
http://www.federalstudentaid.ed.gov/docs/ciolibrary/FSASA-2PIRProcess_v2.0.pdf

The National Science Foundation, U.S., User-Friendly Handbook for Project Evaluation (2002)
http://www.nsf.gov/pubs/2002/nsf02057/start.htm

Washington State Department of Information Services, U.S., Post Implementation Review
http://www.isb.wa.gov/tools/pmframework/projectclosure/postimplementation.aspx


APPENDIX 1
Impact Evaluation
Impact evaluation is defined by the World Bank as an assessment of the impact of a project on
final outcomes. It assesses the changes that can be attributed to a particular project.

A key element in impact evaluation is identifying the “counterfactual”: What would have
happened had the project not taken place? The most frequently used methods of identifying the
counterfactual are:

Randomised control trials (RCTs)


An RCT is a study that measures a project's effect by randomly assigning individuals (or other
units, such as schools or hospitals) to a target group, which receives the services the project
provides, or to a control group, which does not. After project implementation, measurements
are taken to establish the difference between the target group and the control group. Because the
control group simulates what would have happened without the project, the
difference in outcomes between the groups can be attributed to the project
itself. However, this method may raise ethical concerns and is comparatively costly.
Besides, it has to be planned before project implementation, as the control group needs to be
formed in advance.

Quasi-experiments
Like RCTs, quasi-experiments compare the outcomes of a project with an estimate of what
would have happened without it. However, the control group is not randomly assigned.
Instead, it is selected so as to minimise the differences between the two groups (e.g.
similar household incomes and social/education backgrounds). This method carries a higher
risk of misleading results because of the difficulty of eliminating bias in the selection of the
control group.
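Under either design, the estimated impact is the difference in outcomes between the target and control groups. Below is a minimal sketch of that calculation; the outcome scores are hypothetical and the sketch is an illustration, not a method prescribed by the Guide.

```python
# Minimal sketch: estimating a project's impact from target and control
# group outcomes, as in the RCT / quasi-experiment designs described
# above. All outcome scores are hypothetical.

target_group  = [72, 68, 75, 80, 71, 77]   # outcome measure, with project
control_group = [65, 70, 66, 69, 63, 68]   # outcome measure, without project

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

# With a randomised (or well-matched) control group, the difference in
# mean outcomes estimates the effect attributable to the project.
impact = mean(target_group) - mean(control_group)
print(f"Estimated project impact: {impact:+.1f} points")
```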

The benefits and challenges of impact evaluation are well described by the World Bank
Independent Evaluation Group (www.worldbank.org/ieg/ecd/conduct_qual_impact_eval.html).


APPENDIX 2
Typical data collection methods

Questionnaire survey (face-to-face, telephone, on-line, paper)
• Advantages: good for gathering quantitative and descriptive data; can cover a wide range of topics
• Disadvantages: tends to provide a "snapshot" of the position only; may not provide adequate information on context; depends on the quality of the questions being asked

Focus group
• Advantages: group dynamics stimulate the thinking process of participants; through participants' interactions, information on specific topics is obtained from various viewpoints; provides an opportunity to explore topics in depth
• Disadvantages: a few respondents may control the discussion; needs well-qualified, highly trained facilitators; may lose focus; the large volume of information may be difficult to transcribe

Interview
• Advantages: usually yields the richest data, details and new insights; provides an opportunity to explore topics in depth; allows the interviewer to explain or help clarify questions, increasing the likelihood of useful responses
• Disadvantages: time-consuming; needs well-qualified, highly trained interviewers

Document review
• Advantages: inexpensive; provides information on historical trends or sequences
• Disadvantages: may be incomplete; analysis may be time-consuming and access may be difficult


APPENDIX 3
Techniques for identifying underlying issues

Cause and effect diagram

What is it?

A cause and effect diagram is a tool used to identify the real causes of a given effect (or
outcome). A typical cause and effect diagram shows the effect at the right and its main causes (or
factors) along a horizontal axis. These main causes are in turn effects that have their own subcauses,
and so on, down many levels.

How to use it?

(1) Specify the effect to be analysed

The effect can be positive (objectives) or negative (problems). Place it in a box on the
right side of the diagram.

(Diagram: a box labelled "The problem, objective, goal, etc".)

(2) List the major categories of the causes that influence the effect being studied

The "4Ms" (methods, manpower, materials, machinery) or the "4Ps" (policies, procedures,
people, plant) are commonly used as a starting point.

(Diagram: four category branches, "Manpower, personnel, staffing, etc", "Materials, policies, regulations, etc", "Methods, procedures, specifications, etc" and "Machines, plant, equipment, IT system, etc", feeding into the box "The problem, objective, goal, etc".)


(3) Identify causes and subcauses

Within each major category, identify the possible causes and subcauses.

Example
A cause and effect diagram for a high turnover rate in a call centre

Effect: high turnover rate

• Unattractive remuneration package: unfair reward system, poor fringe benefits, low salary
• Low job satisfaction: limited job exposure, boring job
• Poor working environment: old facilities, remote office location
• Insufficient training: inadequate training time, no coaching from supervisors

Interrelationship diagram

What is it?

An interrelationship diagram is used to study the links and relationships between factors and
identify which factor is the major cause. It is useful when there are a number of factors and you
are unsure as to which factors have the most effect on the others.

As a process, an interrelationship diagram is often used after a number of factors have been
identified through other tools such as brainstorming or a cause and effect diagram.

How to use it?

(1) Arrange the factors in a circle

Example
An outsourcing project could not be completed on schedule. The possible factors were:

• Inadequate contract managers
• Unclear service levels
• Unrealistic project schedule
• Unclear project scope
• Poor contractor performance
• Ineffective contract management


(2) Identify cause-and-effect relationships and draw arrows to indicate the direction of influence

Using any factor as the starting point, systematically consider the relationship between
each pair by asking: Is there a relationship? If yes, determine which one is the cause
and which one is the effect. For each relationship pair, draw an arrow from the cause to
the effect. (Note: never draw two-headed arrows. If two factors influence each other,
determine which one has the stronger influence.)

Example
Using the factor "inadequate contract managers" as the starting point, determine
its relationships with the other factors one by one and draw arrows to show the
direction of influence.

(Diagram: the six factors arranged in a circle, with arrows drawn between "inadequate contract managers" and the factors it influences or is influenced by.)


(3) Repeat step (2) in a clockwise direction until all the factors have been considered and
arrows attached to show their relationships

Example
(Diagram: the completed interrelationship diagram, with arrows linking all six factors.)

(4) Tally the influence arrows

For each factor, clearly record the number of arrows going in and going out.

Example

• Inadequate contract managers: 3 incoming, 2 outgoing
• Unrealistic project schedule: 2 incoming, 2 outgoing
• Poor contractor performance: 5 incoming, 0 outgoing
• Ineffective contract management: 3 incoming, 1 outgoing
• Unclear project scope: 0 incoming, 5 outgoing
• Unclear service levels: 1 incoming, 4 outgoing

36 A USER GUIDE TO POST IMPLEMENTATION REVIEWS


APPENDIX

(5) Identify the root cause and outcomes

A high number of outgoing arrows indicates that the factor concerned is a possible root
cause. A high number of incoming arrows indicates that the factor concerned is an
outcome. Knowing the root cause enables the review team to develop recommendations
which solve the problem at its source and bring the maximum improvement.

From the above example, the main cause of the delay is the unclear project scope, since
it has the most outgoing arrows. Poor contractor performance is in fact an outcome,
since it has the most incoming arrows. The causal chain is therefore: unclear project
scope leads to unclear service levels, an unrealistic project schedule, inadequate contract
managers and ineffective contract management, which together lead to poor contractor
performance and ultimately to the delay in the project schedule.
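The arrow tally in step (4) is mechanical enough to automate from a list of cause-to-effect relationships. In the sketch below, which is not part of the Guide, the edge list is one possible set of relationships consistent with the counts in the example (the Guide shows only the totals).

```python
# Minimal sketch: tallying incoming/outgoing arrows from an edge list and
# flagging the likely root cause (most outgoing) and main outcome (most
# incoming). The edges are an assumed reconstruction of the example.
from collections import Counter

edges = [
    ("Unclear project scope", "Unclear service levels"),
    ("Unclear project scope", "Unrealistic project schedule"),
    ("Unclear project scope", "Inadequate contract managers"),
    ("Unclear project scope", "Ineffective contract management"),
    ("Unclear project scope", "Poor contractor performance"),
    ("Unclear service levels", "Unrealistic project schedule"),
    ("Unclear service levels", "Inadequate contract managers"),
    ("Unclear service levels", "Ineffective contract management"),
    ("Unclear service levels", "Poor contractor performance"),
    ("Unrealistic project schedule", "Inadequate contract managers"),
    ("Unrealistic project schedule", "Poor contractor performance"),
    ("Inadequate contract managers", "Ineffective contract management"),
    ("Inadequate contract managers", "Poor contractor performance"),
    ("Ineffective contract management", "Poor contractor performance"),
]

outgoing = Counter(cause for cause, _ in edges)
incoming = Counter(effect for _, effect in edges)

factors = sorted(set(outgoing) | set(incoming))
for f in factors:
    print(f"{f}: in={incoming[f]}, out={outgoing[f]}")

root_cause = max(factors, key=lambda f: outgoing[f])  # most outgoing arrows
outcome = max(factors, key=lambda f: incoming[f])     # most incoming arrows
print(f"Likely root cause: {root_cause}; main outcome: {outcome}")
```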


5 Whys

What is it?

The 5 Whys is a simple problem-solving technique that helps identify the root cause of a problem.
It involves repeatedly asking the question "why" to peel away the layers of symptoms
and reach the root cause. Although the technique is called "5 Whys", the question may need to
be asked fewer or more than five times before the root cause of a problem is identified.

Benefits of the 5 Whys:

• it helps determine the root cause of a problem quickly

• it is easy to learn and apply.

How to use it?

• Write down the problem

• Ask why the problem happens and write down the answer

• If the answer provided does not point to the root cause of the problem, keep asking “why”
and write down the answer until the root cause is identified.

Example

Problem: A high percentage of applications for a business licence could not be processed.

(1) Why could a large number of applications not be processed?
The applicants did not submit the required documents.

(2) Why did the applicants not submit the documents?
The applicants did not know clearly what documents were required.

(3) Why did the applicants not know what documents were required?
The user guide to licensing was published five years ago and was already outdated.

(4) Why was the user guide outdated?
There was no review mechanism.

APPENDIX 4
PIR report template

Contents

1. Executive Summary

1.1 Overall assessment

Provide an overview of whether the project is successful or not in terms of the extent of
project achievements and the performance of project management.

1.2 Lessons learnt

Provide a summary of what has to be done differently to rectify the shortcomings of
the project under review and to improve the delivery of future projects.

1.3 Follow-up actions


Provide a high level action plan to implement the recommendations.


2. Background

2.1 Project background

State the key drivers for implementing the project under review, its objectives, expected
outcomes and deliverables, and the key stakeholders involved.

2.2 Review objectives and scope of assessment

State what the PIR aimed to accomplish and the project areas examined in the review.

2.3 Formation of review team


State the review team composition.



For each team member, state:
• Name: name of the team member
• Role in the review team: e.g. review team leader, review team member, etc.
• Relationship with the project under review: e.g. original project team member, independent third party, etc.

2.4 Review methodology

Describe the review methodology, including the performance measures used to evaluate
project performance. Describe the data collection approach.

The review team can use the following table to summarise the review methodology.

• Project outcomes: state the project outcomes to be measured
• Performance measures: give a short description of the performance measures used
• Data collection approach: state the data collection method

2.5 Limitations and difficulties encountered (if any)

Discuss the limitations of the review methodology and difficulties encountered.

3. Assessment of Project Achievement

3.1 Project outcomes

Describe whether the expected project outcomes are achieved or not and to what extent.
State whether there are any unintended outcomes (positive and negative).

• Project objectives: state the level of achievement; explain the reasons for deviations
• Benefits: state the benefits realised, including tangible and intangible benefits; explain the reasons for deviations
• User satisfaction: state the user feedback; explain the reasons for deviations


3.2 Project management


Describe the project management performance.


For each aspect, state the expected performance, the actual performance and the reasons for deviations:

• Scope: the deliverables expected versus the actual outputs delivered
• Cost: the expected project expenditure and staff resources required versus the actual project expenditure and staff resources used
• Time: the expected project completion date (including for the key milestones) versus the actual project completion date (including for the key milestones)
• Quality: the expected quality standards versus the actual quality standards

4. Lessons Learnt and Recommendations

4.1 Lessons learnt

State the lessons learnt which may be used to improve future project delivery;
distinguish between project-specific lessons and general lessons.

4.2 Recommendations

Describe the measures to rectify the problems identified, to realise benefits not fully
delivered, to reap extra benefits, etc. Explain whether the existing policies and practices
should be changed.

4.3 Action plan

Describe the action plan to implement the recommendations and disseminate the review
findings.




Efficiency Unit
13/F., West Wing, Central Government Offices
11 Ice House Street
Central
Hong Kong

Email: euwm@eu.gov.hk

Tel: 2165 7255

Fax: 2524 7267

Website: www.eu.gov.hk

Printed by the Government Logistics Department
