Configuration Management Metrics
• Metrics: a system or standard of measurement.
Traditional Metrics for SCM
Following are metrics that are typically used by organizations that are involved in measuring the SCM process:
• Average rate of variance from scheduled time
• Rate of first-pass approvals
• Volume of deviation requests by cause
• The number of scheduled, performed, and completed configuration management audits by each phase of the life cycle
• The rate of new changes being released and the rate at which changes are being verified as completed
• Man-hours per project
• Schedule variances
• Tests per requirement
• Change category count
• Changes by source
• Cost variances
• Errors per thousand source lines of code (KSLOC)

IEEE Process for Measurement
• The IEEE methodology [IEEE 1989] describes the measurement process in nine stages.
• These stages may overlap or occur in different sequences, depending on organizational needs.
• Each of these stages in the measurement process influences the production of a delivered product with the potential for high reliability.
• The use of trained personnel in applying measures to the project in a useful way.

• Reliability growth and projection: the assessment of change in failure-freeness of the product under testing or operation.
• Complexity: the assessment of complicating factors in a system.
• Risk, benefit, and cost evaluation measures support delivery decisions based both on technical and cost criteria. Risk can be assessed based on the residual faults present in the product at delivery and the cost of the resulting support activity.

Stage 9: Retain Software Measurement Data
Stage 1: Plan Organizational Strategy
• Initiate a planning process.
• Form a planning group and review reliability constraints and objectives, giving consideration to user needs and requirements.
• Identify the reliability characteristics of a software product necessary to achieve these objectives.
• Establish a strategy for measuring and managing software reliability.
• Document practices for conducting measurements.

Stage 2: Determine Software Reliability Goals
• Define the reliability goals for the software being developed to optimize reliability, considering realistic assessments of project constraints, including size, scope, cost, and schedule.
• Review the requirements for the specific development effort to determine the desired characteristics of the delivered software.
• For each characteristic, identify specific reliability goals that can be demonstrated by the software or measured against a particular value or condition.
• Establish an acceptable range of values.
• Consideration should be given to user needs and requirements.
Stage 7: Assess Reliability
• Analyze measurements to ensure that the reliability of the delivered software satisfies the reliability objectives and that the reliability, as measured, is acceptable.

Stage 8: Use Software
• Assess the effectiveness of the measurement effort and perform the necessary corrective action.
• Conduct a follow-up analysis of the measurement effort to evaluate the reliability assessment and development practices, record lessons learned, and evaluate user satisfaction with the software's reliability.
IEEE Defined Metrics: A Few Examples

Fault Density
• This measure can be used to predict remaining faults by comparison with expected fault density, determine whether sufficient testing has been completed, and establish standard fault densities for comparison and prediction.

Fd = F / KSLOC

• Where:
F = total number of unique faults found in a given interval and resulting in failures of a specified severity level
KSLOC = number of source lines of code, in thousands
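As a minimal sketch, the fault density computation above can be written as a one-line function (the fault count and line count here are hypothetical example values):

```python
def fault_density(unique_faults: int, sloc: int) -> float:
    """Fd = F / KSLOC, where KSLOC is the source line count in thousands."""
    ksloc = sloc / 1000.0
    return unique_faults / ksloc

# Example: 42 unique faults of the tracked severity in a 28,000-line system
print(fault_density(42, 28_000))  # 1.5 faults per KSLOC
```

Comparing the computed value against an expected or standard fault density for similar projects then supports the "is testing sufficient?" decision described above.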
Cumulative Failure Profile
• This is a graphical method used to:
• predict reliability,
• estimate the additional testing time needed to reach an acceptably reliable system,
• identify modules and subsystems that require additional testing.
• A plot is drawn of cumulative failures versus a suitable time base.

Fault-Days Number
• This measure represents the number of days that faults spend in the system, from their creation to their removal.
• For each fault detected and removed, during any phase, the number of days from its creation to its removal is determined (fault-days).
• The fault-days are then summed for all faults detected and removed, to get the fault-days number at the system level, including all faults detected and removed up to the delivery date.
• In cases where the creation date of a fault is not known, the fault is assumed to have been created at the middle of the phase in which it was introduced.
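The summation described above is straightforward to automate once creation and removal dates are recorded. A sketch, using a small hypothetical fault log:

```python
from datetime import date

# Hypothetical fault log: (creation date, removal date) per fault.
faults = [
    (date(2024, 1, 10), date(2024, 2, 1)),   # 22 fault-days
    (date(2024, 1, 20), date(2024, 1, 25)),  # 5 fault-days
    (date(2024, 3, 1),  date(2024, 3, 31)),  # 30 fault-days
]

def fault_days_number(faults):
    """Sum, over all faults, the days between creation and removal."""
    return sum((removed - created).days for created, removed in faults)

print(fault_days_number(faults))  # 57
```

For faults whose creation date is unknown, the midpoint of the originating phase would be substituted into the log before summing, per the rule above.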
Cyclomatic Complexity
• This measure is used to determine the structural complexity of a coded module.
• The use of this measure is designed to limit the complexity of the module, thereby promoting understandability of the module.

C = E − N + 1

• Where:
C = complexity
N = number of nodes (sequential groups of program statements)
E = number of edges (program flows between nodes)

Test Coverage
• This is a measure of the completeness of the testing process, from both a developer's and a user's perspective.
• The measure relates directly to the development, integration, and operational test stages of product development.

TC(%) = (Implemented capabilities / Required capabilities) × (Program primitives tested / Total program primitives) × 100%

• Where:
• Program functional primitives are either modules, segments, statements, branches, or paths.
• Data functional primitives are classes of data.
• Requirement primitives are test cases or functional capabilities.
Mean-Time-to-Failure
• This measure is the basic parameter required by most software reliability models.
• Detailed record keeping of failure occurrences that accurately tracks the time (calendar or execution) at which the faults manifest themselves is essential.
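A common simple estimate of MTTF (one of several; the reliability models referenced above use more sophisticated ones) is the average of the interfailure intervals taken from such a failure log. A sketch with hypothetical execution-time data:

```python
# Cumulative execution hours at which each failure was observed
# (hypothetical data; accurate time-stamping is what makes this usable).
failure_times = [12.0, 30.0, 66.0, 120.0]

def mean_time_to_failure(failure_times):
    """Average the interfailure intervals (time between successive failures)."""
    starts = [0.0] + failure_times[:-1]
    intervals = [t2 - t1 for t1, t2 in zip(starts, failure_times)]
    return sum(intervals) / len(intervals)

print(mean_time_to_failure(failure_times))  # 30.0 hours
```

Whether calendar time or execution time is used must be consistent across the log, since mixing the two time bases invalidates the estimate.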
Questions