SEN CHP 4
1. Heuristic techniques
2. Analytical estimation techniques
3. Empirical estimation techniques
Heuristic Technique
Heuristic techniques assume that the relationships among the
different project parameters can be modelled
using suitable mathematical expressions.
Once the basic parameters are known, the other
parameters can be easily determined by
substituting the values of the basic parameters into
the mathematical expressions.
Heuristic models can be divided into two
classes: single-variable models and multivariable models.
Single-Variable Estimation Models:
A single-variable model estimates the desired characteristic
of the software product from a single previously estimated basic
characteristic, such as its size.
A single-variable estimation model takes the following
form:
Estimated Parameter = c1 * e^d1
where e is the characteristic that has already been estimated
(the independent parameter).
The estimated parameter is the dependent parameter to be
determined, such as effort, duration, or staff size.
c1 and d1 are constants, calibrated from data on past projects.
Basic COCOMO is an example of this type of model.
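The single-variable form can be sketched in Python. The constants below are placeholders for illustration only (they happen to be the Basic COCOMO organic-mode values), not calibrated values for any real project:

```python
def estimate_parameter(e, c1, d1):
    """Single-variable model: Estimated Parameter = c1 * e**d1,
    where e is an already-estimated characteristic (e.g. size in KLOC)."""
    return c1 * e ** d1

# Hypothetical constants from past projects (illustration only)
effort = estimate_parameter(e=32.0, c1=2.4, d1=1.05)  # person-months
```

For a 32 KLOC product this gives an effort estimate of roughly 91 person-months under these example constants.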
Multivariable Cost Estimation Model:
It has the following form:
Estimated Resource = c1 * e1^d1 + c2 * e2^d2 + ...
where e1, e2, ... are the basic independent characteristics of the
software that have already been estimated, and
c1, c2, d1, d2, ... are constants.
Multivariable estimation models are expected to give more
accurate estimates than single-variable models, since
a project parameter is typically influenced by several
independent parameters.
The independent parameters influence the dependent parameter
to different extents.
This is modelled by the constants
c1, c2, d1, d2, ...
These constants are determined from historical data.
The intermediate COCOMO model is an example of this.
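The multivariable form sums several such terms; a minimal sketch, in which the two characteristics and all constants are made-up examples rather than calibrated values:

```python
def estimate_resource(characteristics, constants):
    """Multivariable model: sum of c_i * e_i**d_i over the independent
    characteristics e_i; (c_i, d_i) come from historical data."""
    return sum(c * e ** d for e, (c, d) in zip(characteristics, constants))

# Two hypothetical independent characteristics (e.g. size in KLOC and
# number of external interfaces) with illustrative constants.
effort = estimate_resource([20.0, 5.0], [(2.5, 1.05), (0.8, 1.2)])
```

Each term models how strongly one independent parameter influences the dependent parameter.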
Analytical Estimation Technique
Analytical techniques derive the required results starting with basic
assumptions regarding the project.
Thus, unlike empirical and heuristic techniques,
analytical techniques have a scientific basis.
Halstead's software science is an example of an
analytical technique.
It can be used to derive some interesting results
starting from a few simple assumptions.
It is especially useful for estimating software
maintenance effort.
In fact, it outperforms both empirical and heuristic
techniques when used for predicting software
maintenance effort.
Halstead Complexity Measures
In 1977, Maurice Howard Halstead introduced metrics to
measure software complexity.
Halstead's metrics depend upon the actual implementation
of the program: they are computed statically, directly
from the operators and operands in the source code.
They can be used to evaluate testing time, vocabulary, size,
difficulty, errors, and effort for C/C++/Java source code.
According to Halstead, "A computer program is an
implementation of an algorithm considered to be a collection
of tokens which can be classified as either operators or
operands." Halstead's metrics thus treat a program as a sequence of
operators and their associated operands.
He defined various indicators to measure the complexity of a module.
The Halstead measures are:
Halstead Program Length – the total number of operator
occurrences (N1) plus the total number of operand occurrences (N2):
N = N1 + N2
The estimated program length is N^ = n1 log2 n1 + n2 log2 n2,
where n1 and n2 are the numbers of distinct operators and distinct operands.
Program Volume – V = N log2 n, where n = n1 + n2 is the program vocabulary.
Program Difficulty – the inverse of the program level L:
D = 1 / L
Effort – E = V / L = D * V (Difficulty * Volume)
Language Level – L' = V / D^2, equivalently
lambda = L * V* = L^2 * V, where V* is the potential volume.
Intelligence Content – determines the amount of
intelligence presented (stated) in the program. This
parameter provides a measurement of program
complexity independent of the programming language in
which it was implemented.
I = V / D
Programming Time – the time (in minutes) needed to
translate the existing algorithm into an implementation in the
specified programming language:
T = E / (f * S)
This uses the concept of the processing rate of the
human brain, developed by the psychologist John Stroud.
Stroud defined a moment as the time required
by the human brain to carry out the most
elementary decision. The Stroud number S is therefore
the number of Stroud moments per second, with
5 <= S <= 20. Halstead uses 18. The value of S has been
developed empirically from psychological reasoning, and
its recommended value for programming applications is
18.
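The measures above can be computed directly from token counts. A minimal sketch: the operational difficulty formula D = (n1/2)(N2/n2) used below is the standard Halstead definition (the slide gives only D = 1/L), and the example counts are invented, not taken from a real program:

```python
import math

def halstead(n1, n2, N1, N2, S=18, f=1):
    """Basic Halstead measures from operator/operand counts.
    n1, n2: distinct operators/operands; N1, N2: total occurrences."""
    N = N1 + N2                                       # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
    n = n1 + n2                                       # vocabulary
    V = N * math.log2(n)                              # volume
    D = (n1 / 2) * (N2 / n2)                          # difficulty (standard form)
    L = 1 / D                                         # program level, L = 1/D
    E = D * V                                         # effort, E = V/L = D*V
    T = E / (f * S)                                   # programming time (Stroud S=18)
    I = V / D                                         # intelligence content
    return dict(N=N, N_hat=N_hat, V=V, D=D, L=L, E=E, T=T, I=I)

# Tiny illustrative counts (not from a real program)
m = halstead(n1=10, n2=12, N1=40, N2=50)
```

In practice these counts come from a static scan of the source code's tokens.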
Overview of Empirical Estimation
Empirical estimation techniques are based
on data taken from previous projects,
combined with guesses and assumptions.
Expert Judgment Technique
Expert judgment is a technique in which judgment is
provided based upon a specific set of criteria and/or
expertise that has been acquired in a specific
knowledge area, application area, product area,
particular discipline, industry, etc. Such expertise
may be provided by any group or person with
specialized education, knowledge, skill, experience, or
training. This knowledge base can be provided by
one or more members of the project team, or by
one or more team leaders.
However, expert judgment typically requires
expertise that is not present within the project team
and, as such, it is common for an external group or
person with a specific relevant skill set or knowledge
base to be brought in for consultation.
Delphi cost estimation
Delphi Method is a structured communication technique,
originally developed as a systematic, interactive forecasting
method which relies on a panel of experts. The experts answer
questionnaires in two or more rounds. After each round, a
facilitator provides an anonymous summary of the experts’
forecasts from the previous round with the reasons for their
judgments. Experts are then encouraged to revise their earlier
answers in light of the replies of other members of the panel.
It is believed that during this process the range of answers will
decrease and the group will converge towards the "correct"
answer. Finally, the process is stopped after a predefined stop
criterion (e.g. number of rounds, achievement of consensus,
and stability of results) and the mean or median scores of the
final rounds determine the results.
The Delphi method was developed in the 1950s and 1960s at the RAND
Corporation.
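The facilitator's per-round feedback can be sketched as a small helper; the panel estimates below are hypothetical values, purely for illustration:

```python
from statistics import median

def delphi_round_summary(estimates):
    """Anonymous summary fed back to the panel after each round:
    the median estimate and the spread of responses."""
    return {"median": median(estimates),
            "low": min(estimates),
            "high": max(estimates)}

# Hypothetical effort estimates (person-months) from a 5-expert panel
round1 = [30, 45, 60, 38, 52]
summary = delphi_round_summary(round1)  # experts revise using this summary
```

Over successive rounds the spread (low to high) is expected to narrow, and the final round's median can be taken as the estimate.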
COCOMO (Constructive Cost Model)
COCOMO (Constructive Cost Model) is a regression model based on
LOC, i.e., the number of lines of code. It is a procedural cost estimation
model for software projects, often used to reliably
predict the various parameters associated with a project, such
as size, effort, cost, time, and quality. It was proposed by Barry Boehm in
1981 and is based on a study of 63 projects, which makes it one of the
best-documented models.
The key parameters that define the quality of any software product,
and which are also outcomes of COCOMO, are primarily effort and
schedule:
Effort: the amount of labour required to complete a task. It is
measured in person-months.
Schedule: the amount of time required for the completion
of the job, which is of course related to the effort put in. It is
measured in units of time such as weeks or months.
Basic COCOMO
Basic COCOMO computes software development effort (and
cost) as a function of program size. Program size is expressed
in estimated thousands of lines of code (KLOC). COCOMO
applies to three classes of software projects:
• Organic projects - "small" teams with "good" experience
working with "less than rigid" requirements
• Semi-detached projects - "medium" teams with mixed
experience working with a mix of rigid and less than rigid
requirements
• Embedded projects - developed within a set of "tight"
constraints (hardware, software, operational, ...)
The basic COCOMO equations take the form:
Effort Applied = a_b (KLOC)^(b_b) [man-months]
Development Time = c_b (Effort Applied)^(d_b) [months]
People Required = Effort Applied / Development Time [count]
The coefficients a_b, b_b, c_b and d_b are given
in the following table:

Software project    a_b    b_b    c_b    d_b
Organic             2.4    1.05   2.5    0.38
Semi-detached       3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32
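The Basic COCOMO equations can be sketched in Python using the standard published coefficients for the three project classes; the 32 KLOC input is an arbitrary example:

```python
# Standard Basic COCOMO coefficients (a_b, b_b, c_b, d_b) per project class
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, time in months, average staff)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b          # Effort Applied = a_b * KLOC^b_b
    time = c * effort ** d          # Development Time = c_b * Effort^d_b
    people = effort / time          # People Required = Effort / Time
    return effort, time, people

effort, time, people = basic_cocomo(32.0, "organic")
```

For a 32 KLOC organic project this yields roughly 91 person-months over about 14 months, i.e. an average team of around 6-7 people.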
Kinds of Risk
Schedule Risk:
The project schedule slips when project tasks and
schedule-release risks are not addressed properly.
Schedule risks mainly affect the project, and ultimately
the company's economy, and may lead to project failure.
Schedules often slip due to the following
reasons:
Wrong time estimation
Resources (staff, systems, skills of individuals, etc.)
not tracked properly
Failure to identify complex functionalities and the time
required to develop them
Unexpected project scope expansions
Budget Risk:
Wrong budget estimation.
Cost overruns
Project scope expansion
Operational Risks:
Risks of loss due to improper process implementation,
a failed system, or external events.
Causes of operational risks:
Failure to address priority conflicts
Failure to resolve responsibilities
Insufficient resources
No proper subject training
No resource planning
No communication within the team
Technical risks:
Technical risks generally lead to failure of functionality and
performance.
Causes of technical risks are:
Continuous changing requirements
No advanced technology available or the existing technology is in
initial stages.
The product is complex to implement.
Difficult integration of project modules.
Programmatic Risks:
These are external risks beyond the operational limits; they are
uncertain and outside the control of the program.
Such external events can be:
Running out of funds
Market developments
Changes in customer product strategy and priority
Government rule changes
Risk Assessment
Risk assessment is a term used to describe the overall process
or method where you:
Identify hazards and risk factors that have the potential to
cause harm (hazard identification).
Analyze and evaluate the risk associated with that hazard (risk
analysis, and risk evaluation).
Determine appropriate ways to eliminate the hazard, or control
the risk when the hazard cannot be eliminated (risk control).
A risk assessment is a thorough look at your workplace to
identify those things, situations, processes, etc. that may
cause harm, particularly to people. After identification is made,
you analyze and evaluate how likely and severe the risk is.
When this determination is made, you can then decide what
measures should be in place to effectively eliminate the
hazard or control the risk.
Risk Identification
It is a systematic attempt to specify threats to the project plans.
1. Generic risks: These risks are a potential threat to each software project.
2. Product-specific risks: These risks are recognized by those with a clear
understanding of the technology, the people and the environment which is
specific to the software that is to be built.
A method for recognizing risks is to create an item checklist.
The checklist is used for risk identification, focusing on the subset of
known and predictable risks in the following categories:
1. Product size
2. Business impact
3. Customer characteristic
4. Process definition
5. Development environment
6. Technology to be built
7. Staff size and experience
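Such a checklist can be represented as a simple data structure keyed by the categories above; the sample questions are illustrative assumptions, not taken from the source:

```python
# Risk-identification checklist; one illustrative question per category.
RISK_CHECKLIST = {
    "Product size": "Is the estimated size far beyond past projects?",
    "Business impact": "Are imposed delivery deadlines unrealistic?",
    "Customer characteristics": "Is the customer new to us?",
    "Process definition": "Is no defined software process followed?",
    "Development environment": "Are required tools unavailable or unmastered?",
    "Technology to be built": "Is the technology new to the organization?",
    "Staff size and experience": "Is the team short of experienced people?",
}

def identify_risks(answers):
    """Return the categories whose checklist question was answered 'yes'
    (True), i.e. the known and predictable risks flagged for this project."""
    return [category for category, flagged in answers.items() if flagged]
```

Running the checklist over a project's answers yields the subset of categories that need risk analysis.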
Risk Analysis
Software risk analysis is a very important aspect of risk management.
In this phase the risk is identified and then categorized. After the
categorization of a risk, its level, likelihood (percentage), and impact
are analyzed. Likelihood is defined as a percentage after
examining the chances of the risk occurring due to various
technical conditions.
These technical conditions can be:
Complexity of the technology
Technical knowledge possessed by the testing team
Conflicts within the team
Teams being distributed over a large geographical area
Usage of poor quality testing tools
By impact we mean the consequence of a risk in case it happens. It
is important to know the impact because it is necessary to know
how the business can be affected:
What will be the loss to the customer
How would the business suffer
Loss of reputation or harm to society
Monetary losses
Legal actions against the company
Cancellation of business license
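A common way to combine likelihood and impact is the risk-exposure product, exposure = likelihood x impact; the monetary thresholds in this sketch are illustrative assumptions, not values from the source:

```python
def risk_level(likelihood_pct, impact):
    """Classify a risk from its likelihood (in %) and impact (e.g. loss in $).
    The low/medium/high thresholds are illustrative assumptions."""
    exposure = (likelihood_pct / 100) * impact
    if exposure < 10_000:
        return "low", exposure
    if exposure < 50_000:
        return "medium", exposure
    return "high", exposure

# A risk with 40% likelihood and a potential $60,000 loss
level, exposure = risk_level(likelihood_pct=40, impact=60_000)
```

High-exposure risks are then prioritized for mitigation or contingency planning.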
Level of risk is identified with the help of: