COCOMO 2.0*
Ray Madachy
USC Center for Software Engineering and Litton Data Systems
Richard Selby
UC Irvine and Amadeus Software Research
Abstract
Current software cost estimation models, such as the 1981 Constructive Cost Model
(COCOMO) and its 1987 Ada COCOMO update, have been experiencing increasing
difficulty in estimating the costs of software developed to new life cycle processes and
capabilities. These include non-sequential and rapid-development process models;
reuse-driven approaches involving commercial off-the-shelf (COTS)
packages, reengineering, applications composition, and applications generation
capabilities; object-oriented approaches supported by distributed middleware; and
software process maturity initiatives.
This paper summarizes research in deriving a baseline COCOMO 2.0 model tailored
to these new forms of software development, including rationales for the model
decisions. The major new modeling capabilities of COCOMO 2.0 are a tailorable family
of software sizing models, involving Object Points, Function Points, and Source Lines of
Code; nonlinear models for software reuse and reengineering; an exponent-driver
approach for modeling relative software diseconomies of scale; and several additions,
deletions, and updates to previous COCOMO effort-multiplier cost drivers. This model is
serving as a framework for an extensive current data collection and analysis effort to
further refine and calibrate the model’s estimation capabilities.
1. INTRODUCTION
1.1 Motivation
* To appear in Annals of Software Engineering Special Volume on Software Process and Product
Measurement, J.D. Arthur and S.M. Henry, Eds., J.C. Baltzer AG, Science Publishers, Amsterdam, The
Netherlands, 1995.
“We are becoming a software company” is an increasingly repeated phrase in
organizations as diverse as finance, transportation, aerospace, electronics, and
manufacturing firms. Competitive advantage is increasingly dependent on the
development of smart, tailorable products and services, and on the ability to develop
and adapt these products and services more rapidly than competitors can.
Dramatic reductions in computer hardware platform costs, and the prevalence of
commodity software solutions have indirectly put downward pressure on systems
development costs. This situation makes cost-benefit calculations even more important in
selecting the correct components for construction and life cycle evolution of a system,
and in convincing skeptical financial management of the business case for software
investments. It also highlights the need for concurrent product and process determination,
and for the ability to conduct trade-off analyses among software and system life cycle
costs, cycle times, functions, performance, and qualities.
Concurrently, a new generation of software processes and products is changing the
way organizations develop software. These new approaches—evolutionary, risk-driven,
and collaborative software processes; fourth generation languages and application
generators; commercial off-the-shelf (COTS) and reuse-driven software approaches; fast-
track software development approaches; software process maturity initiatives—lead to
significant benefits in terms of improved software quality and reduced software cost, risk,
and cycle time.
However, although some of the existing software cost models have initiatives
addressing aspects of these issues, these new approaches have not been strongly matched
to date by complementary new models for estimating software costs and schedules. This
makes it difficult for organizations to conduct effective planning, analysis, and control of
projects using the new approaches.
These concerns have led the authors to formulate a new version of the Constructive
Cost Model (COCOMO) for software effort, cost, and schedule estimation. The original
COCOMO [Boehm 1981] and its specialized Ada COCOMO successor [Boehm and
Royce 1989] were reasonably well-matched to the classes of software project that they
modeled: largely custom, build-to-specification software [Miyazaki and Mori 1985,
Boehm 1985, Goudy 1987]. Although Ada COCOMO added a capability for estimating
the costs and schedules for incremental software development, COCOMO encountered
increasing difficulty in estimating the costs of business software [Kemerer 1987, Ruhl
and Gunn 1991], of object-oriented software [Pfleeger 1991], of software created via
spiral or evolutionary development models, or of software developed largely via
commercial-off-the-shelf (COTS) applications-composition capabilities.
These objectives support the primary needs expressed by software cost estimation
users in a recent Software Engineering Institute survey [Park et al. 1994]. In priority
order, these needs were for support of project planning and scheduling, project staffing,
estimates-to-complete, project preparation, replanning and rescheduling, project tracking,
contract negotiation, proposal evaluation, resource leveling, concept exploration, design
evaluation, and bid/no-bid decisions. For each of these needs, COCOMO 2.0 will provide
more up-to-date support than its COCOMO and Ada COCOMO predecessors.
† These figures are judgement-based extensions of the Bureau of Labor Statistics moderate-growth labor
distribution scenario for the year 2005 [CSTB 1993; Silvestri and Lukaseiwicz 1991]. The 55 million End-
User Programming figure was obtained by applying judgement-based extrapolations of the 1989 Bureau of
the Census data on computer usage fractions by occupation [Kominski 1991] to generate end-user
programming fractions by occupation category. These were then applied to the 2005 occupation-category
populations (e.g., 10% of the 25M people in “Service Occupations”; 40% of the 17M people in “Marketing
and Sales Occupations”). The 2005 total of 2.75M software practitioners was obtained by applying a factor
of 1.6 to the number of people traditionally identified as “Systems Analysts and Computer Scientists”
(0.829M in 2005) and “Computer Programmers” (0.882M). The expansion factor of 1.6 to cover software
personnel with other job titles is based on the results of a 1983 survey on this topic [Boehm 1983]. The
2005 distribution of the 2.75M software developers is a judgement-based extrapolation of current trends.

[Figure 1: Software marketplace sectors, including End-User Programming (55M performers in US) and
Infrastructure (0.75M).]

Performers in the three intermediate sectors in Figure 1 will need to know a good deal
about computer science-intensive Infrastructure software and also one or more
applications domains. Creating this talent pool is a major national challenge.
The Application Generators sector will create largely prepackaged capabilities for
user programming. Typical firms operating in this sector are Microsoft, Lotus, Novell,
Borland, and vendors of computer-aided planning, engineering, manufacturing, and
financial analysis systems. Their product lines will have many reusable components, but
also will require a good deal of new-capability development from scratch. Application
Composition Aids will be developed both by the firms above and by software product-
line investments of firms in the Application Composition sector.
The Application Composition sector deals with applications which are too diversified
to be handled by prepackaged solutions, but which are sufficiently simple to be rapidly
composable from interoperable components. Typical components will be graphic user
interface (GUI) builders, database or object managers, middleware for distributed
processing or transaction processing, hypermedia handlers, smart data finders, and
domain-specific components such as financial, medical, or industrial process control
packages.
Most large firms will have groups to compose such applications, but a great many
specialized software firms will provide composed applications on contract. These range
from large, versatile firms such as Andersen Consulting and EDS, to small firms
specializing in such specialty areas as decision support or transaction processing, or in
such applications domains as finance or manufacturing.
The Systems Integration sector deals with large scale, highly embedded, or
unprecedented systems. Portions of these systems can be developed with Application
Composition capabilities, but their demands generally require a significant amount of up-
front systems engineering and custom software development. Aerospace firms operate
within this sector, as do major system integration firms such as EDS and Andersen
Consulting, large firms developing software-intensive products and services
(telecommunications, automotive, financial, and electronic products firms), and firms
developing large-scale corporate information systems or manufacturing support systems.
COCOMO 2.0 follows the openness principles used in the original COCOMO. Thus,
all of its relationships and algorithms will be publicly available. Also, all of its interfaces
are designed to be public, well-defined, and parametrized, so that complementary
preprocessors (analogy, case-based, or other size estimation models), post-processors
(project planning and control tools, project dynamics models, risk analyzers), and higher
level packages (project management packages, product negotiation aids), can be
combined straightforwardly with COCOMO 2.0.
To support the software marketplace sectors above, COCOMO 2.0 provides a family
of increasingly detailed software cost estimation models, each tuned to a sector's needs
and to the type of information available to support software cost estimation.
Third, given the situation in premises 1 and 2, COCOMO 2.0 enables projects to
furnish coarse grained cost driver information in the early project stages, and increasingly
fine-grained information in later stages. Consequently, COCOMO 2.0 does not produce
point estimates of software cost and effort, but rather range estimates tied to the degree of
definition of the estimation inputs. The uncertainty ranges in Figure 2 are used as starting
points for these estimation ranges.
With respect to process strategy, Application Generator, System Integration, and
Infrastructure software projects will involve a mix of three major process models. The
appropriate sequencing of these models will depend on the project’s marketplace drivers
and degree of product understanding.

‡ These seven projects implemented the same algorithmic version of the Intermediate COCOMO cost
model, but with the use of different interpretations of the other product specifications: produce a “friendly
user interface” with a “single-user file system.”
The Application Composition model involves prototyping efforts to resolve potential
high-risk issues such as user interfaces, software/system interaction, performance, or
technology maturity. The costs of this type of effort are best estimated by the
Applications Composition model.
The Early Design model involves exploration of alternative software/system
architectures and concepts of operation. At this stage, not enough is generally known to
support fine-grain cost estimation. The corresponding COCOMO 2.0 capability involves
the use of function points and a small number of additional cost drivers.
The Post-Architecture model involves the actual development and maintenance of a
software product. This model proceeds most cost-effectively if a software life-cycle
architecture has been developed; validated with respect to the system's mission, concept
of operation, and risk; and established as the framework for the product. The
corresponding COCOMO 2.0 model has about the same granularity as the previous
COCOMO and Ada COCOMO models. It uses source instructions and/or function
points for sizing, with modifiers for reuse and software breakage; a set of 17
multiplicative cost drivers; and a set of 5 factors determining the project's scaling
exponent. These factors replace the development modes (Organic, Semidetached, or
Embedded) in the original COCOMO model, and refine the four exponent-scaling factors
in Ada COCOMO.
To summarize, COCOMO 2.0 provides the following three-model series for
estimation of Application Generator, System Integration, and Infrastructure software
projects:
1. The earliest phases or spiral cycles will generally involve prototyping, using
Application Composition capabilities. The COCOMO 2.0 Application
Composition model supports these phases, and any other prototyping activities
occurring later in the life cycle.
2. The next phases or spiral cycles will generally involve exploration of
architectural alternatives or incremental development strategies. To support
these activities, COCOMO 2.0 provides an early estimation model. This uses
function points for sizing, and a coarse-grained set of 5 cost drivers (e.g., two
cost drivers for Personnel Capability and Personnel Experience in place of the
6 current Post-Architecture model cost drivers covering various aspects of
personnel capability, continuity and experience). Again, this level of detail is
consistent with the general level of information available and the general level
of estimation accuracy needed at this stage.
3. Once the project is ready to develop and sustain a fielded system, it should
have a life-cycle architecture, which provides more accurate information on
cost driver inputs, and enables more accurate cost estimates. To support this
stage of development, COCOMO 2.0 provides a model whose granularity is
roughly equivalent to the current COCOMO and Ada COCOMO models. It
can use either source lines of code or function points for a sizing parameter, a
refinement of the COCOMO development modes as a scaling factor, and 17
multiplicative cost drivers.
The above should be considered as current working hypotheses about the most
effective forms for COCOMO 2.0. They will be subject to revision based on subsequent
data analysis. Data analysis should also enable the further calibration of the relationships
between object points, function points, and source lines of code for various languages and
composition systems, enabling flexibility in the choice of sizing parameters.
Comparison of COCOMO, Ada COCOMO, and the COCOMO 2.0 models:

Size
    COCOMO: Delivered Source Instructions (DSI) or Source Lines of Code (SLOC)
    Ada COCOMO: DSI or SLOC
    COCOMO 2.0 Application Composition: Object Points
    COCOMO 2.0 Early Design: Function Points (FP) and Language
    COCOMO 2.0 Post-Architecture: FP and Language or SLOC

Reuse
    COCOMO: Equivalent SLOC = Linear f(DM, CM, IM)
    Ada COCOMO: Equivalent SLOC = Linear f(DM, CM, IM)
    Application Composition: Implicit in model
    Early Design: % unmodified reuse: SR; % modified reuse: nonlinear f(AA, SU, DM, CM, IM)
    Post-Architecture: Equivalent SLOC = nonlinear f(AA, SU, DM, CM, IM)

Breakage
    COCOMO: Requirements Volatility rating (RVOL)
    Ada COCOMO: RVOL rating
    Application Composition: Implicit in model
    Early Design: Breakage %: BRAK
    Post-Architecture: BRAK

Maintenance
    COCOMO: Annual Change Traffic (ACT) = %added + %modified
    Ada COCOMO: ACT
    Application Composition: Object Point Reuse Model
    Early Design: Reuse model
    Post-Architecture: Reuse model

Scale (b) in MM_NOM = a(Size)^b
    COCOMO: Organic: 1.05; Semidetached: 1.12; Embedded: 1.20
    Ada COCOMO: Embedded: 1.04-1.24, depending on degree of: early risk elimination,
        solid architecture, stable requirements, Ada process maturity
    Application Composition: 1.0
    Early Design: 1.01-1.26, depending on the degree of: precedentedness, conformity,
        early architecture and risk resolution, team cohesion, process maturity (SEI)
    Post-Architecture: 1.01-1.26, depending on the degree of: precedentedness, conformity,
        early architecture and risk resolution, team cohesion, process maturity (SEI)

Product Cost Drivers
    COCOMO: RELY, DATA, CPLX
    Ada COCOMO: RELY*, DATA, CPLX*, RUSE
    Application Composition: None
    Early Design: RCPX*†, RUSE*†
    Post-Architecture: RELY, DATA, DOCU*†, CPLX†, RUSE*†

Platform Cost Drivers
    COCOMO: TIME, STOR, VIRT, TURN
    Ada COCOMO: TIME, STOR, VMVH, VMVT, TURN*†
    Application Composition: None
    Early Design: Platform difficulty: PDIF
    Post-Architecture: TIME, STOR, PVOL(=VIRT)

Personnel Cost Drivers
    COCOMO: ACAP, AEXP, PCAP, VEXP, LEXP
    Ada COCOMO: ACAP*, AEXP, PCAP*, VEXP, LEXP*
    Application Composition: None
    Early Design: Personnel capability and experience: PERS*†, PREX*†
    Post-Architecture: ACAP*, AEXP†, PCAP*, PEXP*†, LTEX*†, PCON*†

Project Cost Drivers
    COCOMO: MODP, TOOL, SCED
    Ada COCOMO: MODP*, TOOL*, SCED, SECU
    Application Composition: None
    Early Design: SCED, FCIL*†
    Post-Architecture: TOOL*†, SCED, SITE*†

* Different multipliers.
† Different rating scale.
4.1.1 COCOMO 2.0 Object Point Estimation Procedure
Figure 3 presents the baseline COCOMO 2.0 Object Point procedure for estimating
the effort involved in Applications Composition and prototyping projects. It is a synthesis
of the procedure in Appendix B.3 of [Kauffman and Kumar 1993] and the productivity
data from the 19 project data points in [Banker et al. 1994].
Definitions of terms in Figure 3 are as follows:
• NOP: New Object Points (Object Point count adjusted for reuse)
• srvr: number of server (mainframe or equivalent) data tables used in
conjunction with the SCREEN or REPORT.
• clnt: number of client (personal workstation) data tables used in conjunction
with the SCREEN or REPORT.
• %reuse: the percentage of screens, reports, and 3GL modules reused from
previous applications, pro-rated by degree of reuse.
The productivity rates in Figure 3 are based on an analysis of the year-1 and year-2
project data in [Banker et al. 1994]. In year-1, the CASE tool was itself under
construction and the developers were new to its use. The average productivity of 7
NOP/person-month in the twelve year-1 projects is associated with the Low levels of
developer and ICASE maturity and capability in Figure 3. In the seven year-2 projects,
both the CASE tool and the developers’ capabilities were considerably more mature. The
average productivity was 25 NOP/person-month, corresponding with the High levels of
developer and ICASE maturity in Figure 3.
As another definitional point, note that the use of the term “object” in “Object Points”
defines screens, reports, and 3GL modules as objects. This may or may not have any
relationship to other definitions of “objects”, such as those possessing features such as
class affiliation, inheritance, encapsulation, message passing, and so forth. Counting rules
for “objects” of that nature, when used in languages such as C++, will be discussed under
“source lines of code” in the next section.
Step 3: Weight the number in each cell using the following scheme. The weights reflect
the relative effort required to implement an instance of that complexity level:

                        Complexity-Weight
    Object Type         Simple   Medium   Difficult
    Screen                 1        2         3
    Report                 2        5         8
    3GL Component          -        -        10

Step 4: Determine Object-Points: add all the weighted object instances to get one number,
the Object-Point count.

Step 5: Estimate the percentage of reuse you expect to be achieved in this project.
Compute the New Object Points to be developed, NOP = (Object-Points) × (100 - %reuse) / 100.

Step 6: Determine a productivity rate, PROD = NOP / person-month, from the following
scheme:

    Developers’ experience and capability   Very Low   Low   Nominal   High   Very High
    ICASE maturity and capability           Very Low   Low   Nominal   High   Very High
    PROD (NOP / person-month)                   4       7      13       25       50
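Steps 3 through 6 reduce to a small calculation. The following minimal Python sketch
illustrates them (the function and variable names are ours, not from the paper; Steps 1
and 2, which tally and classify object instances, are assumed already done, and the final
conversion to effort simply follows from the definition of PROD as NOP per person-month):

    # Sketch of the Object Point procedure, Steps 3-6 above.
    WEIGHTS = {
        "screen": {"simple": 1, "medium": 2, "difficult": 3},
        "report": {"simple": 2, "medium": 5, "difficult": 8},
        "3gl":    {"difficult": 10},   # 3GL components are always weighted 10
    }

    # PROD values from Step 6; for simplicity we assume a single combined
    # developer/ICASE rating rather than two separate ratings.
    PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}

    def object_point_effort(counts, pct_reuse, rating):
        """counts: {object_type: {complexity: n}}; returns (NOP, person-months)."""
        op = sum(WEIGHTS[t][c] * n                  # Steps 3-4: weighted sum
                 for t, by_cplx in counts.items()
                 for c, n in by_cplx.items())
        nop = op * (100 - pct_reuse) / 100.0        # Step 5: adjust for reuse
        return nop, nop / PROD[rating]              # effort = NOP / PROD

    # Example: 10 simple screens, 4 medium reports, 2 3GL components,
    # 20% reuse, Nominal maturity: OP = 50, NOP = 40, effort ~ 3.1 PM.
    nop, pm = object_point_effort(
        {"screen": {"simple": 10}, "report": {"medium": 4}, "3gl": {"difficult": 2}},
        pct_reuse=20, rating="nominal")
    print(f"NOP = {nop:.1f}, effort = {pm:.1f} person-months")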
• External Input (Inputs): Count each unique user data or user control input type that
  (i) enters the external boundary of the software system being measured and (ii) adds
  or changes data in a logical internal file.
• External Output (Outputs): Count each unique user data or control output type that
  leaves the external boundary of the software system being measured.
• Internal Logical File (Files): Count each major logical group of user data or control
  information in the software system as a logical internal file type. Include each logical
  file (e.g., each logical group of data) that is generated, used, or maintained by the
  software system.
• External Interface Files (Interfaces): Files passed or shared between software systems
  should be counted as external interface file types within each system.
• External Inquiry (Queries): Count each unique input-output combination, where an
  input causes and generates an immediate output, as an external inquiry type.
Each instance of these function types is then classified by complexity level. The
complexity levels determine a set of weights, which are applied to their corresponding
function counts to determine the Unadjusted Function Points quantity. This is the
Function Point sizing metric used by COCOMO 2.0. The usual Function Point procedure
involves assessing the degree of influence (DI) of fourteen application characteristics on
the software project determined according to a rating scale of 0.0 to 0.05 for each
characteristic. The 14 ratings are added together, and added to a base level of 0.65 to
produce a general characteristics adjustment factor that ranges from 0.65 to 1.35.
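Written out (for reference only, since COCOMO 2.0 does not use this adjustment, as
explained below), the standard adjustment factor is:

    AF = 0.65 + DI_1 + DI_2 + ... + DI_14,  with each DI_i between 0.00 and 0.05

so the fourteen ratings together sweep AF across the 0.65 to 1.35 range.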
Each of these fourteen characteristics, such as distributed functions, performance, and
reusability, thus has a maximum contribution of 5% to estimated effort. This is
inconsistent with COCOMO experience; thus COCOMO 2.0 uses Unadjusted Function
Points for sizing, and applies its reuse factors, cost driver effort multipliers, and exponent
scale factors to this sizing quantity. The COCOMO 2.0 procedure for determining
Unadjusted Function Points is shown in Figure 5.
Step 3: Apply complexity weights. Weight the number in each cell using the following
scheme. The weights reflect the relative value of the function to the user:

                                 Complexity-Weight
    Function Type                Low   Average   High
    Internal Logical Files        7      10       15
    External Interface Files      5       7       10
    External Inputs               3       4        6
    External Outputs              4       5        7
    External Inquiries            3       4        6

Step 4: Compute Unadjusted Function Points. Add all the weighted function counts to get
one number, the Unadjusted Function Points.
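As a cross-check of Steps 3 and 4, a minimal Python sketch (names are ours, not from
the paper):

    # Sketch of the Unadjusted Function Points computation, Steps 3-4 above.
    FP_WEIGHTS = {
        "internal_logical_files":   {"low": 7, "average": 10, "high": 15},
        "external_interface_files": {"low": 5, "average": 7,  "high": 10},
        "external_inputs":          {"low": 3, "average": 4,  "high": 6},
        "external_outputs":         {"low": 4, "average": 5,  "high": 7},
        "external_inquiries":       {"low": 3, "average": 4,  "high": 6},
    }

    def unadjusted_fp(counts):
        """counts: {function_type: {complexity: n}} -> Unadjusted Function Points."""
        return sum(FP_WEIGHTS[ftype][cplx] * n
                   for ftype, by_cplx in counts.items()
                   for cplx, n in by_cplx.items())

    # Example: 2 average files, 6 low inputs, 4 low outputs:
    # 2*10 + 6*3 + 4*4 = 54 Unadjusted Function Points.
    print(unadjusted_fp({"internal_logical_files": {"average": 2},
                         "external_inputs": {"low": 6},
                         "external_outputs": {"low": 4}}))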
Note: The word “file” refers to a logically related group of data and not the physical
implementation of those groups of data.
Table 3: Rating Scale for Software Understanding Increment (SU)

Structure
    Very Low: Very low cohesion, high coupling, spaghetti code.
    Low: Moderately low cohesion, high coupling.
    Nominal: Reasonably well-structured; some weak areas.
    High: High cohesion, low coupling.
    Very High: Strong modularity, information hiding in data / control structures.

Application Clarity
    Very Low: No match between program and application world views.
    Low: Some correlation between program and application.
    Nominal: Moderate correlation between program and application.
    High: Good correlation between program and application.
    Very High: Clear match between program and application world-views.

Self-Descriptiveness
    Very Low: Obscure code; documentation missing, obscure or obsolete.
    Low: Some code commentary and headers; some useful documentation.
    Nominal: Moderate level of code commentary, headers, documentation.
    High: Good code commentary and headers; useful documentation; some weak areas.
    Very High: Self-descriptive code; documentation up-to-date, well-organized, with
        design rationale.

SU Increment to AAF
    Very Low: 50    Low: 40    Nominal: 30    High: 20    Very High: 10
The other nonlinear reuse increment deals with the degree of assessment and
assimilation needed to determine whether even a fully-reused software module is
appropriate to the application, and to integrate its description into the overall product
description. Table 4 provides the rating scale and values for the Assessment and
Assimilation increment AA. For software conversion, this factor extends the Conversion
Planning Increment in [Boehm 1981, p. 558].
Table 4: Rating Scale for Assessment and Assimilation Increment (AA)

    AA Increment    Level of AA Effort
    0               None
    2               Basic module search and documentation
    4               Some module Test and Evaluation (T&E), documentation
    6               Considerable module T&E, documentation
    8               Extensive module T&E, documentation
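Because the baseline equation combining these increments is not reproduced here, the
following Python sketch should be read as an assumption-laden illustration only: it
assumes the original COCOMO adaptation fraction AAF = 0.4·DM + 0.3·CM + 0.3·IM,
and treats AA and SU as percentage increments added to AAF, as the “SU Increment to
AAF” label in Table 3 suggests. The function name and interface are ours.

    def equivalent_sloc(adapted_sloc, dm, cm, im, aa, su):
        """Equivalent new SLOC for adapted software; all inputs are percentages.

        dm/cm/im: % design modified, % code modified, % integration redone
        aa: Assessment and Assimilation increment (Table 4)
        su: Software Understanding increment (Table 3); assumed to apply
            only when the module is actually modified
        """
        aaf = 0.4 * dm + 0.3 * cm + 0.3 * im   # original COCOMO adaptation fraction
        if aaf == 0:
            # Fully reused module: only assessment/assimilation effort applies.
            return adapted_sloc * aa / 100.0
        return adapted_sloc * (aa + aaf + su) / 100.0   # assumed combination

    # Example: 10 KSLOC adapted, 10% design and 20% code modified, 30%
    # reintegration, AA = 4, SU = 20: AAF = 19, equivalent SLOC = 4300.
    print(equivalent_sloc(10_000, dm=10, cm=20, im=30, aa=4, su=20))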
4.4 Breakage
COCOMO 2.0 replaces the COCOMO Requirements Volatility effort multiplier and
the Ada COCOMO Requirements Volatility exponent driver by a breakage percentage,
BRAK, used to adjust the effective size of the product. Consider a project which delivers
100,000 instructions but discards the equivalent of an additional 20,000 instructions. This
project would have a BRAK value of 20, which would be used to adjust its effective size
to 120,000 instructions for COCOMO 2.0 estimation. The BRAK factor is not used in the
Applications Composition model, where a certain degree of product iteration is expected
and is included in the data calibration.
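In code form the adjustment is a one-line size inflation (a minimal sketch; the function
name is ours):

    def effective_size(delivered_sloc, brak):
        """Inflate delivered size by the breakage percentage BRAK."""
        return delivered_sloc * (1 + brak / 100.0)

    print(effective_size(100_000, brak=20))   # 120000.0, matching the example above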
Software cost estimation models often have an exponential factor to account for the
relative economies or diseconomies of scale encountered as a software project increases
its size. This factor is generally represented as the exponent B in the equation:
Effort = A × (Size)^B     EQ 3.
If B < 1.0, the project exhibits economies of scale. If the product's size is doubled, the
project effort is less than doubled. The project's productivity increases as the product size
is increased. Some project economies of scale can be achieved via project-specific tools
(e.g., simulations, test-beds), but in general these are difficult to achieve. For small
projects, fixed startup costs such as tool tailoring and setup of standards and
administrative reports are often a source of economies of scale.
If B = 1.0, the economies and diseconomies of scale are in balance. This linear model
is often used for cost estimation of small projects. It is used for the COCOMO 2.0
Applications Composition model.
If B > 1.0, the project exhibits diseconomies of scale. This is generally due to two
main factors: growth of interpersonal communications overhead and growth of large-
system integration overhead. Larger projects will have more personnel, and thus more
interpersonal communications paths consuming overhead. Integrating a small product as
part of a larger product requires not only the effort to develop the small product, but also
the additional overhead effort to design, maintain, integrate, and test its interfaces with
the remainder of the product.
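The effect of B is easiest to see by doubling size in EQ 3 and taking the effort ratio,
which is independent of A. A brief illustration (the exponent values are ours, chosen
only for contrast, not calibrated):

    # Effort(2*Size) / Effort(Size) = 2**B for Effort = A * Size**B (EQ 3).
    for b in (0.9, 1.0, 1.2):
        print(f"B = {b}: doubling size multiplies effort by {2**b:.2f}")
    # B < 1.0: effort less than doubles (economies of scale)
    # B = 1.0: effort exactly doubles (the Applications Composition case)
    # B > 1.0: effort more than doubles (diseconomies of scale)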
See [Banker et al 1994a] for a further discussion of software economies and
diseconomies of scale.
The COCOMO 2.0 value for the coefficient A in EQ 3 is provisionally set at 3.0.
Initial calibration of COCOMO 2.0 to the original COCOMO project database [Boehm
1981, pp. 496-497] indicates that this is a reasonable starting point.
Thus, a 100 KSLOC project with Extra High (0) ratings for all factors will have ΣWi = 0,
B = 1.01, and a relative effort E = 100^1.01 = 105 PM. A project with Very Low (5)
ratings for all factors will have ΣWi = 25, B = 1.26, and a relative effort E = 100^1.26 =
331 PM. This represents a large variation, but the increase involved in a one-unit change
in one of the factors is only about 4.7%. Thus, this approach avoids the 40% swings
involved in choosing a development mode for a 100 KSLOC product in the original
COCOMO.
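The two endpoints quoted above imply the linear relationship B = 1.01 + 0.01·ΣWi.
A short sketch of the resulting relative-effort calculation (function names are ours):

    def scale_exponent(w_ratings):
        """B from the five scale-factor ratings W_i (0 = Extra High .. 5 = Very Low)."""
        return 1.01 + 0.01 * sum(w_ratings)

    def relative_effort(ksloc, w_ratings):
        """Relative effort E = Size**B, as in the 100 KSLOC examples above."""
        return ksloc ** scale_exponent(w_ratings)

    print(round(relative_effort(100, [0] * 5)))   # 105 PM, all factors Extra High
    print(round(relative_effort(100, [5] * 5)))   # 331 PM, all factors Very Low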
Table 6: Rating Scheme for the COCOMO 2.0 Scale Factors

    Scale Factors (Wi)   Very Low (5) | Low (4) | Nominal (3) | High (2) |
                         Very High (1) | Extra High (0)

    Team cohesion        very difficult interactions | some difficult interactions |
                         basically cooperative interactions | largely cooperative |
                         highly cooperative | seamless interactions

    Process maturity†    Weighted average of “Yes” answers to CMM Maturity Questionnaire
PM_adjusted = PM_nominal × ∏i (EM_i)     EQ 5.
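Combining EQ 3 and EQ 5 gives the overall shape of the estimate. The sketch below
uses the provisional A = 3.0 from above, with placeholder multiplier values rather than
real cost driver ratings from Table 7:

    from math import prod

    A = 3.0   # provisional COCOMO 2.0 coefficient for EQ 3 (see above)

    def adjusted_pm(ksloc, b, effort_multipliers):
        """PM_adjusted = A * Size**B * product(EM_i), per EQ 3 and EQ 5."""
        return A * ksloc ** b * prod(effort_multipliers)

    # Placeholder EM values for illustration; a real estimate would use the
    # 17 Post-Architecture cost driver ratings.
    print(adjusted_pm(100, b=1.12, effort_multipliers=[1.15, 0.91, 1.00]))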
The primary selection and definition criteria for COCOMO 2.0 effort-multiplier cost
drivers were:
• Continuity. Unless there has been a strong rationale otherwise, the COCOMO
2.0 baseline rating scales and effort multipliers are consistent with those in
COCOMO and Ada COCOMO.
• Parsimony. Effort-multiplier cost drivers are included in the COCOMO 2.0
baseline model only if there has been a strong rationale that they would
independently explain a significant source of project effort or productivity
variation.
Table 7 summarizes the COCOMO 2.0 effort-multiplier cost drivers by the four
categories of Product, Platform, Personnel, and Project Factors. The superscripts
following the cost driver names indicate the differences between the COCOMO 2.0 cost
drivers and their counterparts in COCOMO and Ada COCOMO:
blank - No difference in rating scales or effort multipliers
* - Same rating scales, different effort multipliers
† - Different rating scales, different effort multipliers
Table 7 provides the COCOMO 2.0 effort multiplier rating scales. The following
subsections elaborate on the treatment of these effort-multiplier cost drivers, and discuss
those which have been dropped in COCOMO 2.0.
The effort range values can be used in the schedule equation, EQ 6., to determine
schedule range values.
8. Conclusions
Software development trends towards reuse, reengineering, commercial off-the-shelf
(COTS) packages, object orientation, applications composition capabilities, non-
sequential process models, rapid development approaches, and distributed middleware
capabilities require new approaches to software cost estimation.
The wide variety of current and future software processes, and the variability of
information available to support software cost estimation, require a family of models to
achieve effective cost estimates.
The baseline COCOMO 2.0 family of software cost estimation models presented here
provides a tailorable cost estimation capability well matched to the major current and
likely future software process trends.
The baseline COCOMO 2.0 model effectively addresses its objectives of openness,
parsimony, and continuity from previous COCOMO models. It is currently serving as the
framework for an extensive data collection and analysis effort to further refine and
calibrate its estimation capabilities. Initial calibration of COCOMO 2.0 to the previous
COCOMO database indicates that its estimation accuracy is comparable to that of the
original COCOMO for this sample.
9. Acronyms and Abbreviations
11. References
Amadeus (1994), Amadeus Measurement System User’s Guide, Version 2.3a, Amadeus
Software Research, Inc., Irvine, California, July 1994.
Banker, R., R. Kauffman and R. Kumar (1994), “An Empirical Test of Object-Based
Output Measurement Metrics in a Computer Aided Software Engineering (CASE)
Environment,” Journal of Management Information Systems (to appear, 1994).
Banker, R., H. Chang and C. Kemerer (1994a), “Evidence on Economies of Scale in
Software Development,” Information and Software Technology (to appear, 1994).
Behrens, C. (1983), “Measuring the Productivity of Computer Systems Development
Activities with Function Points,” IEEE Transactions on Software Engineering,
November 1983.
Boehm, B. (1981), Software Engineering Economics, Prentice Hall.
Boehm, B. (1983), “The Hardware/Software Cost Ratio: Is It a Myth?” Computer 16(3),
March 1983, pp. 78-80.
Boehm, B. (1985), “COCOMO: Answering the Most Frequent Questions,” In
Proceedings, First COCOMO Users’ Group Meeting, Wang Institute, Tyngsboro,
MA, May 1985.
Boehm, B. (1989), Software Risk Management, IEEE Computer Society Press, Los
Alamitos, CA.
Boehm, B., T. Gray, and T. Seewaldt (1984), “Prototyping vs. Specifying: A Multi-
Project Experiment,” IEEE Transactions on Software Engineering, May 1984, pp.
133-145.
Boehm, B., and W. Royce (1989), “Ada COCOMO and the Ada Process Model,”
Proceedings, Fifth COCOMO Users’ Group Meeting, Software Engineering
Institute, Pittsburgh, PA, November 1989.
Chidamber, S. and C. Kemerer (1994), “A Metrics Suite for Object Oriented Design,”
IEEE Transactions on Software Engineering, (to appear 1994).
Computer Science and Telecommunications Board (CSTB) National Research Council
(1993), Computing Professionals: Changing Needs for the 1990’s, National
Academy Press, Washington DC, 1993.
Devenny, T. (1976), “An Exploratory Study of Software Cost Estimating at the
Electronic Systems Division,” Thesis No. GSM/SM/765-4, Air Force Institute of
Technology, Dayton, OH.
Gerlich, R., and U. Denskat (1994), “A Cost Estimation Model for Maintenance and High
Reuse,” Proceedings, ESCOM 1994, Ivrea, Italy.
Goethert, W., E. Bailey, M. Busby (1992), “Software Effort and Schedule Measurement:
A Framework for Counting Staff Hours and Reporting Schedule Information.”
CMU/SEI-92-TR-21, Software Engineering Institute, Pittsburgh, PA.
Goudy, R. (1987), “COCOMO-Based Personnel Requirements Model,” Proceedings,
Third COCOMO Users’ Group Meeting, Software Engineering Institute,
Pittsburgh, PA, November 1987.
IFPUG (1994), IFPUG Function Point Counting Practices: Manual Release 4.0,
International Function Point Users’ Group, Westerville, OH.
Kauffman, R., and R. Kumar (1993), “Modeling Estimation Expertise in Object Based
ICASE Environments,” Stern School of Business Report, New York University,
January 1993.
Kemerer, C. (1987), “An Empirical Validation of Software Cost Estimation Models,”
Communications of the ACM, May 1987, pp. 416-429.
Kominski, R. (1991), Computer Use in the United States: 1989, Current Population
Reports, Series P-23, No. 171, U.S. Bureau of the Census, Washington, D.C.,
February 1991.
Kunkler, J. (1983), “A Cooperative Industry Study on Software
Development/Maintenance Productivity,” Xerox Corporation, Xerox Square,
XRX2 52A, Rochester, NY 14644, Third Report, March 1985.
Miyazaki, Y., and K. Mori (1985), “COCOMO Evaluation and Tailoring,” Proceedings,
ICSE 8, IEEE-ACM-BCS, London, August 1985, pp. 292-299.
Parikh, G., and N. Zvegintzov (1983), “The World of Software Maintenance,” Tutorial
on Software Maintenance, IEEE Computer Society Press, pp. 1-3.
Park R. (1992), “Software Size Measurement: A Framework for Counting Source
Statements.” CMU/SEI-92-TR-20, Software Engineering Institute, Pittsburgh,
PA.
Park R, W. Goethert, J. Webb (1994), “Software Cost and Schedule Estimating: A
Process Improvement Initiative”, CMU/SEI-94-SR-03, Software Engineering
Institute, Pittsburgh, PA.
Paulk, M., B. Curtis, M. Chrissis, and C. Weber (1993), “Capability Maturity Model for
Software, Version 1.1”, CMU-SEI-93-TR-24, Software Engineering Institute,
Pittsburgh PA 15213.
Pfleeger, S. (1991), “Model of Software Effort and Productivity,” Information and
Software Technology 33 (3), April 1991, pp. 224-231.
Royce, W. (1990), “TRW’s Ada Process Model for Incremental Development of Large
Software Systems,” Proceedings, ICSE 12, Nice, France, March 1990.
Ruhl, M., and M. Gunn (1991), “Software Reengineering: A Case Study and Lessons
Learned,” NIST Special Publication 500-193, Washington, DC, September 1991.
Selby, R. (1988), “Empirically Analyzing Software Reuse in a Production Environment,”
In Software Reuse: Emerging Technology, W. Tracz (Ed.), IEEE Computer
Society Press, 1988, pp. 176-189.
Selby, R., A. Porter, D. Schmidt and J. Berney (1991), “Metric-Driven Analysis and
Feedback Systems for Enabling Empirically Guided Software Development,”
Proceedings of the Thirteenth International Conference on Software Engineering
(ICSE 13), Austin, TX, May 13-16, 1991, pp. 288-298.
Silvestri, G. and J. Lukaseiwicz (1991), “Occupational Employment Projections,”
Monthly Labor Review 114(11), November 1991, pp. 64-94.
SPR (1993), “Checkpoint User’s Guide for the Evaluator”, Software Productivity
Research, Inc., Burlington, MA., 1993.