Software Metrics

• A software metric is a measure of software characteristics that are measurable or countable. Software metrics are valuable for many reasons, including measuring software performance, planning work items, and measuring productivity.
• Within the software development process, many metrics are collected, and they are all interconnected. Software metrics support the four functions of management: planning, organization, control, and improvement.
Classification of Software Metrics
• Software metrics can be classified into the following types:
• 1. Product Metrics: These are measures of various characteristics of the software product. Two important software characteristics are:
1. Size and complexity of the software.
2. Quality and reliability of the software.
• These metrics can be computed at different stages of the SDLC.
• Types of Metrics
• Internal metrics: Internal metrics are the metrics used for measuring
properties that are viewed to be of greater importance to a software
developer. For example, Lines of Code (LOC) measure.
• External metrics: External metrics are the metrics used for measuring
properties that are viewed to be of greater importance to the user,
e.g., portability, reliability, functionality, usability, etc.
• Hybrid metrics: Hybrid metrics are the metrics that combine product,
process, and resource metrics. For example, cost per FP where FP
stands for Function Point Metric.
• Project metrics: Project metrics are used by the project manager to track a project's progress. Data from past projects are used to derive various metrics, such as time and cost, and these estimates serve as a baseline for new projects.
• As the project proceeds, the project manager checks its progress from time to time and compares the actual effort, cost, and time with the original estimates. These metrics are also used to decrease development cost, time, and risk.
• Project quality can also be improved; as quality improves, the number of errors, and the time and cost required, are reduced.
• Advantages of Software Metrics
• Comparative study of various design methodologies for software systems.
• Analysis, comparison, and critical study of different programming languages with respect to their characteristics.
• Comparing and evaluating the capabilities and productivity of the people involved in software development.
• Preparation of software quality specifications.
• Verification of compliance of software systems with requirements and specifications.
• Making inferences about the effort to be put into the design and development of software systems.
• Getting an idea of the complexity of the code.
• Disadvantage of Software Metrics
• The application of software metrics is not always easy, and in some
cases, it is difficult and costly.
• The verification and justification of software metrics are based on
historical/empirical data whose validity is difficult to verify.
• These are useful for managing software products but not for evaluating
the performance of the technical staff.
• The definition and derivation of software metrics are usually based on assumptions that are not standardized and may depend upon the tools available and the working environment.
• Most of the predictive models rely on estimates of certain variables
which are often not known precisely.
Size Oriented Metrics
• LOC Metrics
• It is one of the earliest and simplest metrics for measuring the size of a computer program. It is generally used for calculating and comparing the productivity of programmers. Size-oriented metrics are derived by normalizing quality and productivity measures by the size of the product.
• Following are the points regarding LOC measures:
1.In size-oriented metrics, LOC is considered to be the normalization value.
2.It is an older method that was developed when FORTRAN and COBOL
programming were very popular.
3.Productivity is defined as KLOC / EFFORT, where effort is measured in person-
months.
4.Size-oriented metrics depend on the programming language used.
5. Since productivity is measured as KLOC per person-month, code written in assembly language appears to yield higher productivity than code in a more expressive language.
6. The LOC measure requires a level of detail that may not be practically achievable.
7. The more expressive the programming language, the lower the apparent productivity.
8. The LOC method of measurement does not apply well to projects that use visual (GUI-based) programming, since Graphical User Interfaces are built largely from forms rather than hand-written code, and the LOC metric is not applicable there.
9. It requires that all organizations use the same method for counting LOC, because some organizations count only executable statements, some include comments, and some do not; a standard therefore needs to be established.
10. These metrics are not universally accepted.
Based on the LOC/KLOC count of software, many other metrics can be
computed:

• Errors/KLOC.
• $/ KLOC.
• Defects/KLOC.
• Pages of documentation/KLOC.
• Errors/PM.
• Productivity = KLOC/PM (effort is measured in person-months).
• $/ Page of documentation.
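To make these ratios concrete, the short Python sketch below computes several of them from hypothetical project data (all figures are invented for illustration, not taken from a real project):

# Size-oriented metrics from basic project data (all figures are hypothetical).
loc = 12_500            # delivered lines of code
effort_pm = 24          # effort in person-months
errors = 134            # errors found before release
defects = 29            # defects reported after release
cost_dollars = 168_000  # total project cost
doc_pages = 365         # pages of documentation

kloc = loc / 1000
print(f"Errors/KLOC         : {errors / kloc:.1f}")
print(f"Defects/KLOC        : {defects / kloc:.1f}")
print(f"$/KLOC              : {cost_dollars / kloc:.0f}")
print(f"Doc pages/KLOC      : {doc_pages / kloc:.1f}")
print(f"Productivity KLOC/PM: {kloc / effort_pm:.2f}")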
Advantages of LOC
• Simple to measure
Disadvantage of LOC
• It is defined on the code. For example, it cannot measure the size of
the specification.
• It characterizes only one specific view of size, namely length; it takes no account of functionality or complexity.
• Bad software design may cause an excessive number of lines of code.
• It is language dependent
• Users cannot easily understand it
Halstead's Software Metrics
• According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands."
• Token Count
• In these metrics, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols, which are called tokens.
• The basic measures are
• n1 = count of unique operators.
• n2 = count of unique operands.
• N1 = count of total occurrences of operators.
• N2 = count of total occurrences of operands.
• In terms of the total tokens used, the size of the program can be
expressed as N = N1 + N2.
Halstead metrics are:
• Program Volume (V)
• The unit of measurement of volume is the standard unit for size, "bits." It is the actual size of a program if a uniform binary encoding for the vocabulary is used:
V = N * log2(n)
• Program Level (L)
• The value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size). L is the inverse of difficulty:
L = 1 / D
• Program Difficulty (D)
• The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program:
D = (n1/2) * (N2/n2)
• Programming Effort (E)
• The unit of measurement of E is elementary mental discriminations:
E = V/L = D * V

• Estimated Program Length
• According to Halstead, the first hypothesis of software science is that the length of a well-structured program is a function only of the number of unique operators and operands. The actual length is
N = N1 + N2
• The estimated program length is denoted by N^ and is computed as
N^ = n1 * log2(n1) + n2 * log2(n2)
• Several alternate expressions have also been published to estimate program length.
• Size of Vocabulary (n)
• The size of the vocabulary of a program, which consists of the number of unique
tokens used to build a program, is defined as:
n=n1+n2
where
• n=vocabulary of a program
• n1=number of unique operators
• n2=number of unique operands
• Language Level (λ) - Indicates the level of the programming language in which the algorithm is implemented. The same algorithm demands additional effort if it is written in a low-level programming language; for example, it is easier to program in Pascal than in Assembler.
Language        Language level λ    Variance σ
PL/1            1.53                0.92
ALGOL           1.21                0.74
FORTRAN         1.14                0.81
CDC Assembly    0.88                0.42
PASCAL          2.54                -
APL             2.42                -
C               0.857               0.445
• Counting rules for C language
1.Comments are not considered.
2. Identifier and function declarations are not considered.
3.All the variables and constants are considered operands.
4.Global variables used in different modules of the same program are
counted as multiple occurrences of the same variable.
5.Local variables with the same name in different functions are
counted as unique operands.
6. Function calls are considered as operators.
7.All looping statements e.g., do {...} while ( ), while ( ) {...}, for ( ) {...},
all control statements e.g., if ( ) {...}, if ( ) {...} else {...}, etc. are
considered as operators.
8.In control construct switch ( ) {case:...}, switch as well as all the case
statements are considered as operators.
9. Reserved words like return, default, continue, break, sizeof, etc., are considered as operators.
10. All the brackets, commas, and terminators are considered as operators.
11. GOTO is counted as an operator, and the label is counted as an operand.
12. The unary and binary occurrences of "+" and "-" are dealt with separately. Similarly, the different uses of "*" (e.g., as the multiplication operator) are dealt with separately.
13. In the array variables such as "array-name [index]" "array-name" and
"index" are considered as operands and [ ] is considered an operator.
14. In structure variables such as "struct-name.member-name" or "struct-name -> member-name", struct-name and member-name are considered operands, and '.' and '->' are taken as operators. Member elements with the same name in different structure variables are counted as unique operands.
15. All hash (preprocessor) directives are ignored.
Example: Consider a sorting program SORT (the program listing itself is not reproduced here). List the operators and operands and calculate the values of the software science measures such as n, N, V, E, λ, etc.
Operators Occurrences Operands Occurrences

int 4 SORT 1
() 5 x 7
, 4 n 3
[] 7 i 8
if 2 j 7
< 2 save 3
; 11 im1 3
for 2 2 2
= 6 1 3
- 1 0 1
<= 2 - -
++ 2 - -
return 2 - -
{} 3 - -
n1=14 N1=53 n2=10 N2=38
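A minimal Python sketch of the Halstead arithmetic, using the counts from the table above; the volume, level, and language-level formulas used here are the standard Halstead definitions (V = N*log2(n), L = 1/D, λ = L^2 * V):

import math

# Counts taken from the sorting-program table above.
n1, n2 = 14, 10   # unique operators, unique operands
N1, N2 = 53, 38   # total operator and operand occurrences

n = n1 + n2                                       # vocabulary
N = N1 + N2                                       # program length
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
V = N * math.log2(n)                              # volume (bits)
D = (n1 / 2) * (N2 / n2)                          # difficulty
L = 1 / D                                         # program level
E = D * V                                         # effort (elementary mental discriminations)
lam = (L ** 2) * V                                # language level (lambda)

print(f"n={n}, N={N}, N^={N_hat:.1f}, V={V:.1f}, D={D:.1f}, "
      f"L={L:.3f}, E={E:.0f}, lambda={lam:.2f}")

With these counts the sketch gives n = 24, N = 91, V ≈ 417 bits, D ≈ 26.6, and E ≈ 11,100 elementary mental discriminations.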
• Functional Point (FP) Analysis
• Allan J. Albrecht initially developed function Point Analysis in
1979 at IBM and it has been further modified by the
International Function Point Users Group (IFPUG).
• FPA is used to estimate a software project, including its testing, in terms of the functionality or functional size of the software product.
• Functional point analysis may also be used for test estimation of the product. The functional size of the product is measured in function points, a standard unit of measurement for sizing a software application.
• Objectives of FPA
• The basic and primary purpose of the functional point analysis
is to measure and provide the software application functional
size to the client, customer, and the stakeholder on their
request.
• It is used to measure the software project development along
with its maintenance, consistently throughout the project
irrespective of the tools and the technologies.
• Following are the points regarding FPs
1. FPs of an application is found out by counting the number and
types of functions used in the applications. Various functions used
in an application can be put under five types, as shown in Table:
Measurement Parameters                          Examples
1. Number of External Inputs (EI)               Input screens and tables
2. Number of External Outputs (EO)              Output screens and reports
3. Number of External Inquiries (EQ)            Prompts and interrupts
4. Number of Internal Logical Files (ILF)       Databases and directories
5. Number of External Interface Files (EIF)     Shared databases and shared routines
• These five parameters (EI, EO, EQ, ILF, EIF) are the FPA functional units.
2. FP characterizes the complexity of the software system and
hence can be used to depict the project time and the manpower
requirement.
3. The effort required to develop the project depends on what the
software does.
4. FP is programming language independent.
5. The FP method is used for data processing systems and business systems such as information systems.
6. The five parameters mentioned above are also known as
information domain characteristics.
7. All the parameters mentioned above are assigned some
weights that have been experimentally determined and are
shown in Table
• Weights of the 5 FP Attributes
Measurement Parameter                           Low   Average   High
1. Number of external inputs (EI)                3       4        6
2. Number of external outputs (EO)               4       5        7
3. Number of external inquiries (EQ)             3       4        6
4. Number of internal logical files (ILF)        7      10       15
5. Number of external interface files (EIF)      5       7       10

The functional complexities are multiplied by the corresponding weights for each function, and the values are added up to determine the UFP (Unadjusted Function Point) count of the subsystem.
Here the weighting factor for each measurement parameter will be low, average, or high.
The Function Point (FP) is then calculated with the following formula:
FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = Count-total * CAF
where Count-total is the UFP and ∑(fi) is the sum of all 14 questionnaire responses, giving the complexity adjustment value/factor CAF (i ranges from 1 to 14). Usually, a student is provided with the value of ∑(fi).
Also note that ∑(fi) ranges from 0 to 70, i.e.,
• 0 <= ∑(fi) <= 70
• and CAF ranges from 0.65 to 1.35 because
1. When ∑(fi) = 0, CAF = 0.65
2. When ∑(fi) = 70, CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35
• Based on the FP measure of software many other metrics can be computed:
1.Errors/FP
2.$/FP.
3.Defects/FP
4.Pages of documentation/FP
5.Errors/PM.
6.Productivity = FP/PM (effort is measured in person-months).
7.$/Page of Documentation.
• 8. LOCs of an application can be estimated from FPs. That is, they are
interconvertible. This process is known as backfiring. For example, 1
FP is equal to about 100 lines of COBOL code.
• 9. FP metrics is used mostly for measuring the size of Management
Information System (MIS) software.
• 10. But the function points obtained above are unadjusted function points
(UFPs). These (UFPs) of a subsystem are further adjusted by
considering some more General System Characteristics (GSCs). It is a
set of 14 GSCs that need to be considered. The procedure for adjusting
UFPs is as follows:
1. The Degree of Influence (DI) of each of these 14 GSCs is assessed on a scale of 0 to 5: if a particular GSC has no influence, its weight is taken as 0, and if it has a strong influence, its weight is 5.
2.The score of all 14 GSCs is totaled to determine Total Degree of
Influence (TDI).
3.Then Value Adjustment Factor (VAF) is computed from TDI by using the
formula: VAF = (TDI * 0.01) + 0.65
• The value of VAF lies within 0.65 to 1.35 because

• When TDI = 0, VAF = 0.65


• When TDI = 70, VAF = 1.35
• VAF is then multiplied with the UFP to get the
final FP count: FP = VAF * UFP
Example: Compute the function point, productivity, documentation,
cost per function for the following data:
1.Number of user inputs = 24
2.Number of user outputs = 46
3.Number of inquiries = 8
4.Number of files = 4
5.Number of external interfaces = 2
6.Effort = 36.9 p-m
7.Technical documents = 265 pages
8.User documents = 122 pages
9.Cost = $7744/ month
Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2,
2, 4, 5.
Measurement Parameter                           Count     Weighting factor
1. Number of external inputs (EI)                 24      *  4  =  96
2. Number of external outputs (EO)                46      *  4  = 184
3. Number of external inquiries (EQ)               8      *  6  =  48
4. Number of internal logical files (ILF)          4      * 10  =  40
5. Number of external interface files (EIF)        2      *  5  =  10
                                               Count-total      → 378
So the sum of all fi (i = 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 + 3 + 3 + 2 + 2 + 4 + 5 = 43

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = 378 * [0.65 + 0.01 * 43]
   = 378 * [0.65 + 0.43]
   = 378 * 1.08 ≈ 408

• Productivity = FP / Effort = 408 / 36.9 ≈ 11.1 FP per person-month

Total pages of documentation = technical documents + user documents
                             = 265 + 122 = 387 pages
• Documentation = pages of documentation / FP = 387 / 408 ≈ 0.95 pages per FP
• Cost per function point = (cost per person-month) / productivity = 7744 / 11.1 ≈ $700 per FP
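The same arithmetic can be written as a small Python sketch that simply reproduces the figures of the worked example above:

# Function-point arithmetic for the worked example above.
counts  = {"EI": 24, "EO": 46, "EQ": 8, "ILF": 4, "EIF": 2}
weights = {"EI": 4, "EO": 4, "EQ": 6, "ILF": 10, "EIF": 5}   # weights used in the example

ufp = sum(counts[k] * weights[k] for k in counts)            # 378
fi  = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]             # 14 complexity adjustment factors
caf = 0.65 + 0.01 * sum(fi)                                  # 1.08
fp  = ufp * caf                                              # about 408

effort_pm, cost_per_month = 36.9, 7744
doc_pages = 265 + 122

print(f"UFP = {ufp}, CAF = {caf:.2f}, FP = {fp:.0f}")
print(f"Productivity  = {fp / effort_pm:.1f} FP/PM")
print(f"Documentation = {doc_pages / fp:.2f} pages/FP")
print(f"Cost per FP   = ${cost_per_month / (fp / effort_pm):.0f}")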


• Extended Function Point (EFP) Metrics
• FP metric has been further extended to compute:
• Feature points.
• 3D function points.
• Feature Points
• Feature point is the superset of function point measure that can be applied to
systems and engineering software applications.
• The feature points are used in applications in which the algorithmic complexity is high, such as real-time systems with timing constraints, embedded systems, etc.
• Feature points are computed by counting the information domain values, each weighted by a single weight.
• The feature point measure includes another measurement parameter: ALGORITHMS.
• The table for the computation of feature points is as follows:
Feature Point Calculations
Measurement Parameter                           Count     Weighting factor
1. Number of external inputs (EI)                  -      *  4  =  -
2. Number of external outputs (EO)                 -      *  5  =  -
3. Number of external inquiries (EQ)               -      *  4  =  -
4. Number of internal logical files (ILF)          -      *  7  =  -
5. Number of external interface files (EIF)        -      *  7  =  -
6. Algorithms used                                 -      *  3  =  -
                                               Count-total      →  -

The feature point is thus calculated with the following formula:

FP = Count-total * [0.65 + 0.01 *∑(fi)]


= Count-total * CAF
• where count-total is obtained from the above table.
CAF = [0.65 + 0.01 * ∑(fi)]
• and ∑(fi) is the sum of all 14 questionnaires and show the
complexity adjustment value/factor-CAF (where i ranges from 1
to 14). Usually, a student is provided with the value of ∑(fi) .
• Function point and feature point both represent systems
functionality only.
• For very complex real-time applications, the feature point count is typically 20 to 35% higher than the count determined using function points alone.
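A small Python sketch of the feature-point tally, using the weights from the table above; the counts and the value of ∑(fi) are hypothetical placeholders, not data from the text:

# Feature-point count using the weights from the table above.
# The counts and sum_fi below are hypothetical, purely for illustration.
weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 7, "EIF": 7, "ALG": 3}
counts  = {"EI": 10, "EO": 12, "EQ": 6, "ILF": 5, "EIF": 4, "ALG": 20}

count_total = sum(counts[k] * weights[k] for k in weights)
sum_fi = 42                                    # assumed sum of the 14 adjustment factors
caf = 0.65 + 0.01 * sum_fi
feature_points = count_total * caf
print(f"Count-total = {count_total}, CAF = {caf:.2f}, Feature points = {feature_points:.0f}")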
• 3D function points
• Three dimensions may be used to represent 3D function points
data dimension, functional dimension, and control dimension.
• The data dimension is evaluated as FPs are calculated. Here,
counts are made for inputs, outputs, inquiries, external
interfaces, and files.
• The functional dimension adds another feature, transformation, that is, the sequence of steps that transforms input to output.
• The control dimension adds another feature, transition, defined as the total number of transitions between states. A state represents some externally observable mode of the system.
Example: Compute the 3D-function point value for an embedded system with the
following characteristics:
• Internal data structures = 6
• External data structures = 3
• No. of user inputs = 12
• No. of user outputs = 60
• No. of user inquiries = 9
• No. of external interfaces = 3
• Transformations = 36
• Transitions = 24
• Assume complexity of the above counts is high.
• Information Flow Metrics
• The other set of metrics we would like to consider are known as information flow metrics. They are founded on the following idea: a system consists of components, and it is the work these components do and how they are fitted together that determine the complexity of the system. The following working definitions are used in information flow:
• Component: Any element identified by decomposing a (software) system into its constituent parts.
• Cohesion: The degree to which a component performs a single function.
• Coupling: The term used to describe the degree of linkage between one component and others in the same system.
• Information flow metrics deal with this type of complexity by observing the flow of information among system components or modules. This metric was proposed by Henry and Kafura, so it is also known as Henry and Kafura's metric.
• The metric is based on the measurement of the information flow among system modules and is sensitive to the complexity due to interconnections among system components. In this measure, the complexity of a software module is defined to be the sum of the complexities of the procedures included in the module. A procedure contributes complexity due to the following two factors:
• The complexity of the procedure code itself.
• The complexity due to the procedure's connections to its environment. The effect of the first factor is captured through the LOC (lines of code) measure. For the quantification of the second factor, Henry and Kafura defined two terms, namely FAN-IN and FAN-OUT.
• FAN-IN: the number of local flows into a procedure plus the number of data structures from which the procedure retrieves information.
• FAN-OUT: the number of local flows from a procedure plus the number of data structures which that procedure updates.
• Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2
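A short Python sketch of the Henry and Kafura computation; the procedure names, lengths, and fan-in/fan-out values are hypothetical:

# Henry and Kafura complexity for a set of procedures (hypothetical data).
# complexity = length * (fan_in * fan_out) ** 2
procedures = [
    # (name, length in LOC, fan-in, fan-out)
    ("read_input",   40, 1, 3),
    ("sort_records", 65, 2, 2),
    ("write_report", 30, 4, 1),
]

for name, length, fan_in, fan_out in procedures:
    complexity = length * (fan_in * fan_out) ** 2
    print(f"{name:14s} complexity = {complexity}")

# Module complexity is the sum of the complexities of its procedures.
module_complexity = sum(l * (fi * fo) ** 2 for _, l, fi, fo in procedures)
print("module complexity =", module_complexity)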
• Cyclomatic Complexity
• Cyclomatic complexity is a software metric used to measure the
complexity of a program.
• Thomas J. McCabe developed this metric in 1976. McCabe interprets a computer program as a strongly connected directed graph.
• Nodes represent parts of the source code having no branches, and arcs represent possible control flow transfers during program execution.
• The notion of program graph has been used for this measure, and it is
used to measure and control the number of paths through a program.
The complexity of a computer program can be correlated with the
topological complexity of a graph.
• How to Calculate Cyclomatic Complexity?
• McCabe proposed the cyclomatic number V(G) of graph theory as an indicator of software complexity. The cyclomatic number is equal to the number of linearly independent paths through a program in its graph representation. For a program control graph G, the cyclomatic number V(G) is given as:
• V(G) = E - N + 2 * P
• E = the number of edges in graph G
• N = the number of nodes in graph G
• P = the number of connected components in graph G
Properties of Cyclomatic Complexity:
• V(G) is the maximum number of linearly independent paths in the graph.
• V(G) >= 1
• G will have only one path if V(G) = 1.
• Complexity should be minimized, with 10 as the commonly recommended upper bound.
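As an illustration, the Python sketch below applies V(G) = E - N + 2P to a small, hypothetical control-flow graph (an if/else followed by a loop):

# Cyclomatic complexity V(G) = E - N + 2P for a small control-flow graph.
# The graph below is a hypothetical if/else followed by a loop.
edges = [
    ("start", "if"), ("if", "then"), ("if", "else"),
    ("then", "loop"), ("else", "loop"),
    ("loop", "body"), ("body", "loop"), ("loop", "end"),
]
nodes = {n for edge in edges for n in edge}

E = len(edges)        # 8 edges
N = len(nodes)        # 7 nodes
P = 1                 # one connected component
V = E - N + 2 * P     # V(G) = 8 - 7 + 2 = 3
print("Cyclomatic complexity V(G) =", V)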
Software Project Planning
• A software project is the complete process of software development, from requirement gathering through testing and maintenance, carried out according to defined execution methodologies within a specified period to achieve the intended software product.
Software Project Manager
• The software project manager is responsible for planning and scheduling project development. They manage the work to ensure that it is completed to the required standard, and they monitor progress to check that the project is on time and within budget.
• Project planning must address major issues such as size and cost estimation, scheduling, project monitoring, personnel selection and evaluation, and risk management.
• To plan a successful software project, we must understand:
• The scope of the work to be completed
• The risks to be incurred
• The resources required
• The tasks to be accomplished
• The schedule to be followed
• Software project planning starts before technical work starts. The main planning activities are:
• Size estimation: the size is the crucial parameter for the estimation of other activities.
• Resource requirements are estimated based on cost and development time.
• The project schedule is very useful for controlling and monitoring the progress of the project; it depends on the resources and the development time.
• Software Cost Estimation
• For any new software project, it is necessary to know how much it will cost to develop and how much development time it will take.
• These estimates are needed before development is initiated, but how is this done?
• The project scope must be established in advance.
• Software metrics are used as a basis from which estimates are made.
• The project is broken into small pieces, which are estimated individually.
• To achieve reliable cost and schedule estimates, several options arise:
• Delay estimation until later in the project.
• Use relatively simple decomposition techniques to generate project cost and schedule estimates.
• Acquire one or more automated estimation tools.
Uses of Cost Estimation
• During the planning stage, one needs to choose how many engineers
are required for the project and to develop a schedule.
• In monitoring the project's progress, one needs to assess whether the project is progressing according to plan and take corrective action, if necessary.
• Cost Estimation Models
• A model may be static or dynamic. In a static model, a single
variable is taken as a key element for calculating cost and time.
• In a dynamic model, all variables are interdependent, and there is no basic variable.
• Static, Single Variable Models: When a model makes use of a single variable to calculate desired values such as cost, time, effort, etc., it is said to be a single variable model. The most common equation is:
C = a * L^b
• where C = cost, L = size, and a and b are constants.
• The Software Engineering Laboratory established a model called the SEL model for estimating its software production. This model is an example of a static, single variable model.
• Static, Multivariable Models: These models are based on the same approach as method (1), but they depend on several variables describing various aspects of the software development environment. Selected equations combine these variables to give an estimate of time and cost. Such models are called multivariable models.
Walston and Felix developed a model at IBM that provides the following relationship between effort and lines of source code:
E = 5.2 * (KLOC)^0.91
where E is the effort in person-months.
Example: Compare the Walston-Felix model with the SEL model on a software development project expected to involve 8 person-years of effort.
1. Calculate the number of lines of source code that can be produced.
2. Calculate the duration of the development.
3. Calculate the productivity in LOC/PY.
4. Calculate the average manning.

The amount of manpower involved = 8 PY = 96 person-months.
(a) The number of lines of source code can be obtained by inverting each model's effort equation to solve for size.
(b) The duration in months can then be calculated from each model's duration equation.
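A Python sketch of this comparison is given below. It assumes the commonly cited forms of the two models — SEL: E = 1.4 L^0.93 and D = 4.6 L^0.26; Walston-Felix: E = 5.2 L^0.91 and D = 4.1 L^0.36, with L in KLOC, E in person-months, and D in months — and inverts the effort equations to recover size:

# Compare the SEL and Walston-Felix models for a project of 8 person-years.
# Assumed model forms (commonly cited): SEL  E = 1.4 * L**0.93, D = 4.6 * L**0.26
#                                       W-F  E = 5.2 * L**0.91, D = 4.1 * L**0.36
# with L in KLOC, E in person-months, D in months.
effort_pm = 8 * 12   # 8 person-years = 96 person-months

models = {
    "SEL":           {"a": 1.4, "b": 0.93, "c": 4.6, "d": 0.26},
    "Walston-Felix": {"a": 5.2, "b": 0.91, "c": 4.1, "d": 0.36},
}

for name, m in models.items():
    kloc = (effort_pm / m["a"]) ** (1 / m["b"])        # invert E = a * L**b
    duration = m["c"] * kloc ** m["d"]                 # D = c * L**d
    productivity = kloc * 1000 / (effort_pm / 12)      # LOC per person-year
    manning = effort_pm / duration                     # average staff size
    print(f"{name:14s} L = {kloc:5.1f} KLOC, D = {duration:4.1f} months, "
          f"{productivity:7.0f} LOC/PY, avg staff = {manning:.1f}")

Under these assumed equations, the SEL model suggests roughly 94 KLOC over about 15 months, while the Walston-Felix model suggests roughly 25 KLOC over about 13 months.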
COCOMO Model
• Boehm proposed COCOMO (Constructive Cost Model) in 1981.
• COCOMO is one of the most generally used software estimation models in the world.
• COCOMO predicts the efforts and schedule of a software product based on the size of the
software.
The necessary steps in this model are:
• Get an initial estimate of the development effort from evaluation of thousands of delivered lines
of source code (KDLOC).
• Determine a set of 15 multiplying factors from various attributes of the project.
• Calculate the effort estimate by multiplying the initial estimate with all the multiplying factors i.e.,
multiply the values in step1 and step2.
• The initial estimate (also called the nominal estimate) is determined by an equation of the form used in the static single variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, an equation of the following type is used:
Ei = a * (KDLOC)^b

• The values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into three types:
1. Organic: A development project can be treated as organic if it deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects.
• Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.
2. Semidetached: A development project can be treated as semidetached if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience with related systems and may be unfamiliar with some aspects of the system being developed.
• Examples of semidetached systems include a new operating system (OS), a Database Management System (DBMS), and a complex inventory management system.
3. Embedded: A development project is treated as embedded if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.
• Examples: ATM software, air traffic control.
For the three product categories, Boehm provides different sets of expressions to predict the effort (in units of person-months) and the development time from the size estimate in KLOC (kilo lines of code). The effort estimation takes into account the productivity loss due to holidays, weekly offs, coffee breaks, etc.
• According to Boehm, software cost estimation should be done through three stages:
• Basic Model
• Intermediate Model
• Detailed Model
1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the project parameters. The following expressions give the basic COCOMO estimation model, where Tdev is the estimated time to develop the software, expressed in months:

• Organic: Effort = 2.4(KLOC)^1.05 PM; Tdev = 2.5(Effort)^0.38 months

• Semi-detached: Effort = 3.0(KLOC)^1.12 PM; Tdev = 2.5(Effort)^0.35 months

• Embedded: Effort = 3.6(KLOC)^1.20 PM; Tdev = 2.5(Effort)^0.32 months
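A minimal Python sketch of the basic COCOMO arithmetic; the 32 KLOC project size is a hypothetical figure for illustration:

# Basic COCOMO effort and development-time estimate (standard coefficients).
# Effort = a * KLOC**b (person-months), Tdev = c * Effort**d (months).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b          # person-months
    tdev = c * effort ** d          # months
    return effort, tdev

effort, tdev = basic_cocomo(32.0, "organic")   # hypothetical 32 KLOC project
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months, "
      f"avg staff = {effort / tdev:.1f}")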


2. Intermediate Model: The basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants determined for various classes of software systems. However, product, hardware, personnel, and project attributes also affect effort. The intermediate COCOMO model recognizes this and refines the initial estimate obtained through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of software engineering.
• Classification of Cost Drivers and their attributes:
(i) Product attributes –
• Required software reliability extent
• Size of the application database
• The complexity of the product
Hardware attributes -
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnaround time
Personnel attributes -
• Analyst capability
• Software engineering capability
• Applications experience
• Virtual machine experience
• Programming language experience
Project attributes -
• Use of software tools
• Application of software engineering methods
• Required development schedule
3. Detailed COCOMO Model: Detailed COCOMO incorporates all the characteristics of the intermediate version with an assessment of the cost drivers' impact on each step of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute. In detailed COCOMO, the whole software is divided into multiple modules, COCOMO is applied to the various modules to estimate effort, and the module efforts are then summed.
The six phases of detailed COCOMO are:
• Planning and requirements
• System design
• Detailed design
• Module code and test
• Integration and test
• Cost constructive model
• The effort is calculated as a function of program size, and a set of cost drivers is given for each phase of the software lifecycle.
Putnam Resource Allocation Model
• The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size.
• Putnam makes use of the so-called Norden/Rayleigh curve to estimate project effort, schedule, and defect rate.
• Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observation about productivity levels to derive the software equation, commonly written as
L = Ck * K^(1/3) * td^(4/3)
where K is the total effort expended in product development, L is the product size estimate in KLOC, td is the development time, and Ck is a technology constant reflecting the development environment.
• Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve.
• Only a small number of engineers are needed at the beginning of a project to carry out planning and specification tasks.
• As the project progresses and more detailed work becomes necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.
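As a rough sketch, the software equation above can be inverted to estimate the total effort K once the size, development time, and technology constant Ck are fixed; all the numbers below are illustrative assumptions, not calibrated values:

# Putnam software equation, L = Ck * K**(1/3) * td**(4/3),
# inverted to estimate total effort K from size, schedule and technology constant.
# All numbers below are illustrative assumptions.
def putnam_effort(size_loc: float, td_years: float, ck: float) -> float:
    """Return the total effort K implied by the software equation."""
    return (size_loc / (ck * td_years ** (4 / 3))) ** 3

K = putnam_effort(size_loc=100_000, td_years=3.0, ck=8000)  # hypothetical project
print(f"Estimated total effort K = {K:.1f} person-years")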
