8617 - Autumn 2018
Q.1 How can you differentiate the concepts of feasibility testing and pilot testing? Why is
plan formulation a crucial function? What are the principal characteristics of an
educational plan? Discuss.
Answer:
What is a feasibility study? As the name implies, a feasibility study is used to determine the
viability of an idea: whether a project is legally and technically feasible as well as
economically justifiable. It tells us whether a project is worth the investment; in some cases,
a project may simply not be doable. There can be many reasons for this. For example, a
project may require too many resources, which not only ties up resources that could be
performing other tasks but may also cost more than the organization would earn back from a
project that is not profitable.
A well-designed study should offer a historical background of the business or project, such
as a description of the product or service, accounting statements, details of operations and
management, marketing research and policies, financial data, legal requirements, and tax
obligations. Generally, such studies precede technical development and project
implementation.
A feasibility study evaluates the project’s potential for success; therefore, perceived
objectivity is an important factor in the credibility of the study for potential investors and
lending institutions. There are five types of feasibility study—separate areas that a feasibility
study examines, described below.
1. Technical Feasibility - this assessment focuses on the technical resources available to the
organization. It helps organizations determine whether the technical resources meet
capacity and whether the technical team is capable of converting the ideas into working
systems. Technical feasibility also involves evaluation of the hardware, software, and other
technology requirements of the proposed system. As an exaggerated example, an
organization wouldn’t want to try to put Star Trek’s transporters in their building—currently,
this project is not technically feasible.
2. Economic Feasibility - this assessment typically involves a cost/benefit analysis of the
project, helping organizations determine the viability, cost, and benefits associated with a
project before financial resources are allocated. It also serves as an independent project
assessment and enhances project credibility, helping decision makers determine the
positive economic benefits to the organization that the proposed project will provide.
3. Legal Feasibility - this assessment investigates whether any aspect of the proposed project
conflicts with legal requirements like zoning laws, data protection acts, or social media laws.
Let’s say an organization wants to construct a new office building in a specific location. A
feasibility study might reveal the organization’s ideal location isn’t zoned for that type of
business. That organization has just saved considerable time and effort by learning that their
project was not feasible right from the beginning.
4. Operational Feasibility - this assessment examines whether the organization can actually
operate and maintain the proposed system once it is delivered, and how well the project fits
existing processes and the willingness of staff to use its outputs.
5. Scheduling Feasibility - this assessment is the most important for project success; after all,
a project will fail if not completed on time. In scheduling feasibility, an organization estimates
how much time the project will take to complete. (A simple numeric sketch of the economic
and scheduling checks follows this list.)
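As a rough illustration of the arithmetic behind the economic and scheduling assessments, the sketch below computes a discounted benefit-cost ratio and a three-point (PERT) duration estimate in Python. The figures, function names and the use of PERT are assumptions added here for illustration; they are not taken from the text above.

def benefit_cost_ratio(expected_benefits, expected_costs, discount_rate=0.10):
    """Discount yearly benefit and cost streams and return the benefit/cost ratio."""
    pv_benefits = sum(b / (1 + discount_rate) ** t
                      for t, b in enumerate(expected_benefits, start=1))
    pv_costs = sum(c / (1 + discount_rate) ** t
                   for t, c in enumerate(expected_costs, start=1))
    return pv_benefits / pv_costs

def pert_duration(optimistic, most_likely, pessimistic):
    """Three-point (PERT) estimate of the expected duration of an activity."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical project: yearly benefits and costs (in thousands) and duration guesses (weeks).
ratio = benefit_cost_ratio([400, 500, 550], [600, 200, 150])
weeks = pert_duration(optimistic=20, most_likely=26, pessimistic=40)
print(f"Benefit-cost ratio: {ratio:.2f}")    # a ratio above 1 supports economic feasibility
print(f"Estimated duration: {weeks:.1f} weeks")

Under these assumed numbers, a benefit-cost ratio above 1 and a duration estimate that fits the required completion date would support the economic and scheduling feasibility of the project, respectively.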
When these areas have all been examined, the feasibility study helps identify any constraints
the proposed project may face.
The importance of a feasibility study is based on organizational desire to “get it right” before
committing resources, time, or budget. A feasibility study might uncover new ideas that could
completely change a project’s scope. It’s best to make these determinations in advance,
rather than to jump in only to learn that the project just won’t work. Conducting a feasibility
study is always beneficial to the project as it gives you and other stakeholders a clear picture
of the proposed project.
Apart from the areas of feasibility listed above, some projects also require other constraints
to be analyzed.
{================}
Q.2 Critically analyze the process of plan elaboration in Pakistan. Suggest different
strategies to make this process efficient and effective.
Answer:
Whatever the size or type of project, there are 5 essential elements that you must get right in
order to achieve a successful outcome. Whether your project is about improving an existing
product or service, managing change or implementing a new system, the same basic
considerations are required when managing projects. Get these right and you will manage a
successful project. Get them wrong and your project will be thwarted by challenges, issues
and problems.
In order to ensure that all your projects reach the required level of success, here are the 5
essential elements that need to be included:
1. Objectives
The first stage of any project is to understand the need for the project and what it is trying to
achieve. SMART (Specific, Measurable, Attainable, Relevant, Timely) objectives need to be
established along with measures of success and key milestones where progress can be
reviewed. Working as an internal project manager will require close liaison with key internal
stakeholders and departments to establish their specific requirements and set commonly
agreed objectives.
2. Business Benefits
The variety of activities that are deemed to be projects is wide-ranging, and can include new
products, processes and services. The development of any of these needs to be closely
linked to meeting defined business objectives and adding value to the organisation. The
benefits of a project should be well articulated at the beginning so there is a clear link
between the success of the project and the impact on overall business aims.
3. Communication
It is vital to sell the benefits of any project to those who will be affected during the project or
by the project's final outcome. Implementing a new process requires that end users
understand why the project is beneficial and potential buyers need to be convinced by the
advantages of new products and services. In essence, communicating the message of why
new or different is good will help counteract the typical human reluctance to change.
4. Resources
It is vital to ensure that adequate resources in terms of people, time, finances and equipment
are in place. Internally, this could involve the IT department providing the appropriate
hardware/software, Human Resources recruiting the necessary people or the Facilities
department providing offices or other relevant support. Allocated budgets and finance also
need to be in place, as well as appropriate timelines for project completion.
5. People
No project manager works in isolation. There are many stakeholders involved in a project
who all have a specific role to play and who all have a vested interest in the project's success.
The key stakeholders who drive projects and help make them a success include:
⦁ Sponsor: The project sponsor is the person who defines the business objectives that
drive the project. The sponsor can be a member of the senior management team or
someone from outside of the organisation.
⦁ Project Manager: A professional project manager creates the project plan and ensures
that it meets the budget, schedule and scope determined by the sponsors. The project
manager is also responsible for risk assessment and management.
⦁ Project Team Members: These can include subject area experts, members of
departments, external professionals and new recruits. Anyone who can offer a positive
contribution to the project in terms of their knowledge and capabilities makes a good
team member.
Including these elements in a project will ensure that the final outcome is a successful one.
Many people might consider a program to be just one really large project. A project is a
singular effort of defined duration, whereas a program is comprised of a collection of
projects. Problem solved, right? Actually, it’s a bit more complex than that. While programs
and projects actually have several different characteristics and different functions within an
organization, they also have many commonalities. Likewise, project managers and program
managers are two different roles within an organization, yet they share similar duties.
While the state of the industry is always changing, it behooves you and your organization to
know when your projects should become programs. Let’s look at how they’re different – and
how they’re the same – so you can apply the concepts to your own programs and projects.
⦁ Structure: A project is well-defined, with a Project Charter that spells out exactly what
the scope and objectives are for the project. A program tends to have greater levels of
uncertainty. The team is also bigger: the program team supervises and coordinates the
work on a number of projects, so while the core team may not have many people in it,
the wider team includes the project managers and all the project team members.
⦁ Effort: This is the most significant difference between projects and programs. A
project represents a single effort. It is a group of people forming a team working
towards a common goal. A program is different; it is a collection of projects. Together
all the projects form a cohesive package of work. The different projects are
complementary and help the program achieve its overall objectives. There are likely to
be overlaps and dependencies between the projects, so a program manager will
assess these and work with the project managers concerned to check that overall the
whole program progresses smoothly.
⦁ Duration: Some projects do go on for several years but most of the projects you’ll work
on will be shorter than that. On the other hand, programs are definitely longer. As they
set out to deliver more, they take longer. Programs tend to be split into tranches
or phases. Some projects are also split like this, but not all projects last long enough to
be delivered in multiple phases.
{================}
Q.3 Evaluate the stages of the project planning process. Why do projects fail after careful
planning? Write your point of view with practical examples from educational projects.
Answer:
{================}
Q.4 Compare the concepts of project appraisal and project evaluation. Discuss the key
issues while appraising an educational project.
Answer:
Appraisal is the evaluation of the overall ability of the feasible project to succeed. It is done
after the feasibility study of the project has been completed. In other words, project appraisal
is an overall assessment of the relevancy, feasibility, and sustainability of a project prior to
making the decision whether to undertake it or not. It is also a technique for evaluating and
analyzing the investment and effort involved, and for calculating the project’s viability. The aim
is to consider and compare the feasible options and select the one that best meets the
objectives. The feasibility study serves as the groundwork for appraisal: the aspects covered
in the feasibility study are re-examined during project appraisal. Project appraisal is a process
of detailed examination of several aspects of a given project before resources are committed.
A project appraisal document generally consists of the project introduction, objectives and
scope, implementation techniques, organization description, outputs and benefits of the
project, and project monitoring and evaluation arrangements.
⦁ Will the project as designed meet the objectives and needs of the country and society?
Thus the primary function of project appraisal is to determine a feasible project’s ability to
achieve its objectives. Objectives differ from project to project: for a private project the
objective is profitability, while for a public project the objectives are socio-economic growth,
employment generation, poverty reduction and so on.
Types of Appraisal:
In project appraisal different factors examined during feasibility study are re-examined.
These aspects are technical, economic, marketing, financial, managerial and environmental.
These different aspects are explained below:
1. Technical appraisal
It ascertains whether the prerequisites for the successful commissioning of the project are in
place with respect to technical solutions, technical specifications, technical risks and
uncertainties, local resource availability, size, location, geology and so on. The different
technical aspects of a project are assessed and summarized in this appraisal.
2. Economic appraisal
It looks at the worth of the project to society, so it is also known as social cost-benefit
analysis. It judges the project from a larger social point of view. In this appraisal, the project’s
contribution to self-sufficiency, employment generation and social order is assessed and
summarized, and the assessment criteria are drawn from social cost-benefit analysis.
3. Market appraisal
Marketing analysis is primarily concerned with market-related issues. Factors such as
project capacity, market demand, demand forecasts, estimated revenue, marketing
programme, market share, competition and the ability to satisfy customer needs are
summarized and assessed.
4. Management appraisal
It assesses the managerial capability of the implementing organization, including the
competence, experience and commitment of the key people responsible for execution.
5. Environmental appraisal
It examines the likely impact of the project on the environment and whether adequate
measures are planned to mitigate any adverse effects.
6. Financial appraisal
It focuses on the financial feasibility of the project; in simple words, whether the project will
be able to deliver the return expected on the capital invested. Factors such as investment
outlay, the cost of capital, means of financing, projected profitability, break-even points, cash
flows and investment worth are judged in terms of various criteria of merit and risk.
Sensitivity analysis and ratio analysis are also carried out.
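To make the financial appraisal concrete, the following minimal Python sketch computes a net present value, a payback period and a break-even point for a hypothetical project. All figures are invented and the helper functions are illustrative only, not a standard appraisal tool.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until the cumulative cash flow turns non-negative (None if never)."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units that must be sold before contribution covers fixed costs."""
    return fixed_costs / (price_per_unit - variable_cost_per_unit)

flows = [-1000, 300, 350, 400, 450]    # hypothetical: initial outlay, then yearly inflows
print(f"NPV at 12% cost of capital: {npv(0.12, flows):.1f}")
print(f"Payback period: {payback_period(flows)} years")
print(f"Break-even: {break_even_units(5000, 25, 15):.0f} units")

Under these assumed numbers, a positive NPV at the chosen cost of capital, an acceptable payback period and a reachable break-even volume would together indicate financial feasibility.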
Concept of Evaluation
It is about building benchmarks and accountability into your plan, and using them to evaluate
the plan as you go and after the project is finished. It gives your project a more strategic
structure, provides evidence for your results and, importantly, contributes to the knowledge
base about effective crime prevention.
Valid measurement tools provide information that is a good reflection of what they are trying
to measure. For example, if you wanted to measure the extent to which people were victims
of a certain type of crime, you might want to look at more than just the number of reports to
police since we know that many crimes are unreported.
Reliable instruments provide information that is likely to be consistent over time. It will not be
affected by small changes in such things as the mood of people who respond to a survey or
other circumstances unique to the day on which they complete the survey.
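One common way to check this kind of reliability is a test-retest correlation: administer the same instrument to the same respondents twice and correlate the two sets of scores. The sketch below is illustrative only, with hypothetical scores and a hand-written Pearson correlation in Python.

from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical questionnaire scores from the same respondents on two occasions.
first_administration  = [12, 15, 9, 20, 17, 11, 14]
second_administration = [13, 14, 10, 19, 18, 10, 15]
r = pearson(first_administration, second_administration)
print(f"Test-retest reliability: r = {r:.2f}")   # values near 1 suggest a stable instrument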
Quality evaluations also use consistent data collection procedures. For example, interview
questions should be asked of all participants in the same way, and interviewers should be
careful to record the same information at every session.
Where possible, collect data before and after a project. When data is collected only at the
end of the project, you can't tell whether any change actually occurred.
Good evaluations require resources - that is, time and money. Some evaluation-related
activities may be carried out by project staff (for example, questionnaires can be
administered by a project coordinator), research assistants (for example, students may
compile and analyse data) or by people with special expertise (for example, an evaluation
consultant might draft your questionnaire).
Your project goal might be to reduce the number of a certain type of crime in your
community. This may require the modification of behaviour in a community that takes place
over five to ten years to achieve any reduction. To measure those long-term trends may not
be realistic. In this case, you should focus on some short- and medium-term outcomes.
The steps described below demonstrate how you would go about developing your evaluation plan.
1. Determine why and when you want to evaluate, for example:
⚪ To see if you are on track to achieve your intended results, if you are on time
and if you are using resources as planned mid-way through your project
(mid-term evaluation), so that you may make adjustments as needed
⚪ To see if the overall changes you were trying to achieve actually happened by
the end of the project (final evaluation) and identify what you learned.
2. Identify the sources of data you will use, for example:
⚪ Project records such as a project activity log/daily journal: a book where you
record project activities and observations as they happen
⚪ Number and type of documents produced during the project (tools, flyers,
advertisements, media coverage of your event/project, curriculum, etc.)
⚪ Data from official sources (e.g. school records, census data, health data)
⚪ Questionnaires or surveys
3. Determine the frequency of the data collection and who will collect the information.
4. Finally, determine how you will analyse your data and report your findings to funders,
your community and your project partners and stakeholders.
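A minimal way to capture steps 2 to 4 is to record, for each indicator, its data source, collection frequency, the person responsible and how it will be analysed. The Python sketch below shows one possible structure; the field names and example entries are assumptions added here for illustration, not a prescribed format.

# Illustrative only: a simple evaluation plan captured as data.
evaluation_plan = [
    {
        "indicator": "number of participants completing the training",
        "data_source": "project activity log",
        "collection_frequency": "weekly",
        "collected_by": "project coordinator",
        "analysis": "totals reported in mid-term and final evaluations",
    },
    {
        "indicator": "change in participant knowledge scores",
        "data_source": "pre/post questionnaire",
        "collection_frequency": "baseline and end of project",
        "collected_by": "research assistant",
        "analysis": "compare mean scores before and after the project",
    },
]

for item in evaluation_plan:
    print(f"{item['indicator']}: {item['data_source']} ({item['collection_frequency']})")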
TYPES OF EVALUATION
2. Mid-term evaluation: This is also commonly referred to as the mid-term reviews. Just like
the name suggests, the mid-term reviews are conducted mid-project. The mid-term reviews
are important for the purposes of establishing whether a project is heading towards the set
goals and objectives, thereafter informing management and control decisions by the project
management. It is important in building organizational confidence in the project
implementation strategies, or in the case where indicators are not pointing towards success,
acting as a call to the change of implementation strategies. It is however important to note
that in the case where a project has a long life cycle, it might be important to conduct
periodic evaluations before the actual mid-term evaluation, although this might depend on
management goodwill and availability of funds.
{================}
Qualitative Design
It is uncommon for evaluators to use only qualitative methods when evaluating a program.
Some clients are more at ease with making decisions based on quantitative data outputs,
while others are conscious of the significant costs and time associated with qualitative data
collection. It is also very difficult to generalize to other populations based on qualitative data,
which therefore makes program replication and scale-up a challenge. On the other hand,
supporters of qualitative evaluations believe that context is such a large factor in program
implementation success that generalizing to other populations is not possible regardless of
the methodology chosen. However, qualitative evaluation methods are extremely beneficial in
providing rich program feedback and evaluators should consider integrating them into the
evaluation plan, i.e. using a "mixed methods" approach. For more information on qualitative
data collection methods to incorporate into program evaluations, please refer to the
certificate in Global Health Research.
Quantitative Designs
If clients or stakeholders are interested in a quantitative evaluation plan, there are a number
of design sequences that can be used, depending on time, money, and availability of data.
Linking directly to the needs of the stakeholders and the purpose of the evaluation, study
designs can either be experimental (with a traditional control group), quasi-experimental
(with a comparison group that may not necessarily be a control group), or non-experimental
(where no formal control or comparison group exists). It is important to note how evaluation
differs from monitoring; monitoring data, such as monthly reporting forms, stock-outs for
supply chain management, and some disease tracking, are collected consistently and frequently
throughout the life of a project. Please refer
to http://www.uniteforsight.org/metrics-course/monitoring-evaluation for more information
on the differences between monitoring and evaluation processes.
In many cases, it is impossible to randomly assign individuals into experimental and control
groups, as is often done in clinical or randomized control trials (RCT). When RCT is
unavailable, comparison groups are selected that match the target population on a number of
population characteristics, closely resembling the group that receives the intervention. This is
quasi-experimental design (QED), the most common quantitative design in global health
evaluation. There are various schedules of data collection, differing in level of robustness
according to time, money, and availability of data (see: Constraints on Evaluation).
It is recommended that QED data collection occur at four different time periods:
pre-intervention, mid-intervention, post-intervention, and ex-post intervention. Pre-intervention
data (also referred to as baseline data) is collected on relevant evaluation indicators prior to
intervention. Evaluators and program implementers look for changes in the intervention group
from pre- to post-intervention data, signaling the possibility of project impact. Baseline data
may also be collected far in advance of program development as a way to determine the
needs of the community in more participatory approaches. Mid-intervention measures occur
around the mid-point of intervention, which will vary according to the timeline of intervention.
Post-intervention data is collected immediately after the intervention has ended, and ex-post
data is collected much later, possibly 5 to 10 years after the intervention has ended, in order
to assess the impact or sustained effects of the program. At each of these times, data may
be collected from the intervention group, the comparison group, or both. The source and time
of data collection will depend on the level of available resources and the needs of the
evaluation.
In the most ideal situation with surplus resources, evaluators would collect data on relevant
indicators from both groups at all four times. Without any randomization as in experimental
designs, this design is the closest to a randomized control trial and can be visually
represented as follows:
                 Pre    Mid    Post    Ex-post
Intervention      X      X      X        X
Comparison        X      X      X        X
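With pre- and post-intervention measurements available in both groups, a simple difference-in-differences calculation is one common way to estimate the program effect while netting out background trends. This technique is not named in the text above; the Python sketch below is an illustrative example with hypothetical values.

def difference_in_differences(intervention_pre, intervention_post,
                              comparison_pre, comparison_post):
    """Change in the intervention group minus change in the comparison group."""
    change_intervention = intervention_post - intervention_pre
    change_comparison = comparison_post - comparison_pre
    return change_intervention - change_comparison

# Hypothetical indicator: average score on the evaluation indicator in each group.
effect = difference_in_differences(intervention_pre=52.0, intervention_post=68.0,
                                   comparison_pre=51.0, comparison_post=58.0)
print(f"Estimated program effect: {effect:.1f} points")   # 16 - 7 = 9 points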
Various time, budget, and data constraints often deter the use of this robust and extensive
schedule of health program evaluations. First, evaluators may not be introduced into the
process until the intervention is already in place, eliminating the chances that baseline data
has been collected unless there was a pre-existing record. Second, budgets may not allow for
mid-intervention measurements nor guarantee ex-post ones, which may be considered less critical
than baseline and post-intervention measures for determining project outcomes. Third,
depending on how large the required sample size is, the collection of data that may be
considered non-essential may be removed for the sake of balancing a budget with a
significant sample size. Also, budget, time, and data constraints may eliminate the possibility
of a comparison group: a valid comparison group may not exist, there may not be sufficient
funding for comprehensive data collection, or late intervention may preclude the creation of a
comparison group. As the number and type of measurements are reduced, the model of data
collection becomes less robust and more susceptible to invalidity.
The least robust QED is represented only by a post-intervention measure in the intervention
group, shown below:
                 Post
Intervention      X
Comparison        -
Occurring frequently in global health evaluation, this schedule can demonstrate neither
change in the intervention group nor the impact of the intervention relative to a comparison
group, which are two main goals of evaluation. Instead, evaluators collect data on
participants in the program (those who have received the intervention) and attempt to make
the most relevant and conclusive statements about the success of the program. At this
stage, evaluators may choose to supplement quantitative data with qualitative methods in
order to bolster the findings. Recall is a useful strategy for estimating a rough baseline (e.g.
asking people what their income used to be or how many times they went to the doctor five
years ago); however, it creates a risk of bias if not triangulated with other data collection
methods. This is an appropriate place to incorporate qualitative methods to triangulate and
reinforce the recalled data in the post-intervention period. With
only one measurement and no effective comparison group, this particular study design may
also be categorized as non-experimental.
In between the most and least robust study designs, there are other options that may suit
available resources; evaluators and clients may discuss which measurements they believe
are most crucial to determining program success (or obtaining continued funding) and
proceed from there. It should be noted, however, that if limited resources exist, it is better to
do two measurements in a comparison and intervention group than take two measurements
at different times from only the intervention group. For example it is (usually) better to do
this:
                 Post
Intervention      X
Comparison        X
than this:
                 Pre    Post
Intervention      X      X
Comparison        -      -
The latter works well when the theory in practice is well established or the client and
evaluators are only interested in determining adequacy of the intervention. However, in pilot
programs or evaluations that want to determine probability or plausibility, only measuring
twice among the intervention group is not sufficient. As discussed in the Purpose of
Evaluation module, probability and plausibility evaluations are more involved but have the
ability to derive more pertinent and significant program information than adequacy evaluations.
Threats to Validity
Evaluations face limited robustness and validity as the number of measurements is reduced
and the comparison group is compromised. In particular, the amount of control in an
evaluation determines the level of internal validity. Consequently, the evaluation schedule
with a single post-test measurement of the intervention group would have little internal
validity due to few control measures in place (i.e. no comparison group, limited
measurements, no randomization).
When only one group is measured, evaluation risks shortcomings due to "maturation". This is
the risk that evaluation indicators are measuring only growth that would have occurred
naturally through the maturation process, rather than growth due to program activities. For
instance, measuring math skills of 5th graders will most likely produce improved indicators
simply because of the normal educational process and brain function, not necessarily the
impact of a tutoring program. History threats are similar and reflect events or changes in the
surrounding environment that may have altered the results, as opposed to program
activities. For example, if a country's economy is improving and it is spending more on
healthcare, it is possible that a decline in child mortality is attributable to these external
factors rather than to a specific intervention. It would be in the interest of the evaluator
to examine potential confounding variables and processes of maturation in the final
evaluation.
{================}