Configuration Management Metrics


METRICS

• Metrics: a system or standard of measurement.
• Why do we need metrics?
  • Customers and end users require a working, quality product.
  • Measuring the process / the product tells us whether we have achieved our goal.
  • To quantify the project's progress as well as the attributes of the product.
  • A metric enables one to understand and manage the process, as well as to measure the impact of changes to the process.


Which Metric to Use?

• Depends on the need / requirements.
• Some organizations use generic metrics used either by the industry or by other organizations.
• Some organizations generate metrics specific to a particular task.
• Generally, the characteristics of a suitable metric are that it should be:
  • Collectable
  • Reproducible
  • Pertinent
  • System independent

Examples of Product Metrics

• Size:
  • lines of code,
  • pages of documentation,
  • number and size of tests,
  • function count
• Complexity:
  • decision count,
  • variable count,
  • number of modules,
  • size/volume,
  • depth of nesting
• Reliability:
  • count of changes required by phase,
  • count of discovered defects,
  • defect density = number of defects / size,
  • count of changed lines of code


Process Metrics

• Complexity:
  • time to design, code, and test,
  • defect discovery rate by phase,
  • cost to develop,
  • number of external interfaces,
  • defect fix rate
• Methods and tool use:
  • number of tools used and why,
  • project infrastructure tools,
  • tools not used and why
• Resource metrics:
  • years of experience with team,
  • years of experience with language,
  • years of experience with type of software,
  • support personnel to engineering personnel ratio,
  • non-project time to project time ratio
• Productivity:
  • percent time to redesign,
  • percent time to redo,
  • variance of schedule,
  • variance of effort

Reviewing the Results of the Metrics Program

• Once the organization determines the metrics to be implemented, it must develop a methodology for reviewing the results of the metrics program.
• Metrics are useless if they do not result in improved quality or productivity.
• At a minimum, the organization should:
  • Determine the metric and measuring technique.
  • Measure to understand where they stand.
  • Establish worst, best, and planned cases.
  • Modify the process or product, depending on the results of measurement.
  • Remeasure to see what has changed.
  • Reiterate.

Traditional Metrics for SCM

Following are metrics typically used by organizations that measure the SCM process:

• Average rate of variance from scheduled time
• Rate of first-pass approvals
• Volume of deviation requests by cause
• The number of scheduled, performed, and completed configuration management audits in each phase of the life cycle
• The rate of new changes being released and the rate at which changes are being verified as completed
• The number of completed versus scheduled actions (stratified by type and priority)
• Man-hours per project
• Schedule variances
• Tests per requirement
• Change category count
• Changes by source
• Cost variances
• Errors per thousand source lines of code (KSLOC)
• Requirements volatility

IEEE Process for Measurement

• The IEEE methodology [IEEE 1989] describes the measurement process in nine stages.
• These stages may overlap or occur in different sequences, depending on the organization's needs.
• Each of these stages in the measurement process influences the production of a delivered product with the potential for high reliability.


IEEE Process for Measurement

• Certain factors that influence the measurement process include:
  • A firm management commitment to continually assess product and process maturity, or stability, or both, during the project
  • The use of trained personnel in applying measures to the project in a useful way
  • Software support tools
  • A clear understanding of the distinctions among errors, faults, and failures

IEEE Process for Measurement

• Product measures include:
  • Errors, faults, and failures: the count of defects with respect to human cause, program bugs, and observed system malfunctions
  • Mean-time-to-failure, failure rate: a derivative measure of defect occurrence and time
  • Reliability growth and projection: the assessment of change in failure-freeness of the product under testing or operation
  • Remaining product faults: the assessment of fault-freeness of the product in development, test, or maintenance
  • Completeness and consistency: the assessment of the presence and agreement of all necessary software system parts
  • Complexity: the assessment of complicating factors in a system

IEEE Process for Measurement

• Process measures include:
  • Management control measures address the quantity and distribution of errors and faults and the trend of costs necessary for defect removal.
  • Coverage measures allow one to monitor the ability of developers and managers to guarantee the required completeness in all activities of the life cycle, and support the definition of corrective actions.
  • Risk, benefit, and cost evaluation measures support delivery decisions based on both technical and cost criteria. Risk can be assessed based on residual faults present in the product at delivery and the cost of the resulting support activity.

IEEE Process for Measurement: The Nine Stages

• Stage 1: Plan Organizational Strategy
• Stage 2: Determine Software Reliability Goals
• Stage 3: Implement Measurement Process
• Stage 4: Select Potential Measures
• Stage 5: Prepare Data Collection & Measurement Plan
• Stage 6: Monitor the Measurements
• Stage 7: Assess Reliability
• Stage 8: Use Software Measurement
• Stage 9: Retain Software Measurement Data

Stage 1: Plan Organizational Strategy

• Initiate a planning process.
• Form a planning group and review reliability constraints and objectives, giving consideration to user needs and requirements.
• Identify the reliability characteristics of a software product necessary to achieve these objectives.
• Establish a strategy for measuring and managing software reliability.
• Document practices for conducting measurements.

Stage 2: Determine Software Reliability Goals

• Define the reliability goals for the software being developed to optimize reliability, considering realistic assessments of project constraints, including size, scope, cost, and schedule.
• Review the requirements for the specific development effort to determine the desired characteristics of the delivered software.
• For each characteristic, identify specific reliability goals that can be demonstrated by the software or measured against a particular value or condition.
• Establish an acceptable range of values.
• Consideration should be given to user needs and requirements.
• Establish intermediate reliability goals at various points in the development effort.



Stage 3: Implement Measurement Process

• Establish a software reliability measurement process that best fits the organization's needs.
• Review the rest of the process and select those stages that best lead to optimum reliability.
• Some suggestions:
  • Select appropriate data collection and measurement practices designed to optimize software reliability.
  • Document the measures required, the intermediate and final milestones when measurements are taken, the data collection requirements, and the acceptable values for each measure.
  • Assign responsibilities for performing and monitoring measurements, and provide the necessary support for these activities from across the internal organization.
  • Initiate a measure selection and evaluation process.
  • Prepare training material for training personnel in the concepts, principles, and practices of software reliability and reliability measures.

Stage 4: Select Potential Measures

• Identify potential measures that would be helpful in achieving the reliability goals established in Stage 2.


Stage 5: Prepare Data Collection and Measurement Plan

• Prepare a data collection and measurement plan for the development and support effort.
  • For each potential measure, determine the primitives needed to perform the measurement.
  • Data should be organized so that information related to events during the development effort can be properly recorded in a database and retained for historical purposes.
• For each intermediate reliability goal identified in Stage 2, identify the measures needed to achieve this goal.
• Identify the points during development when the measurements are to be taken.
• Establish acceptable values or a range of values to assess whether the intermediate reliability goals are achieved.
• Include in the plan an approach for monitoring the measurement effort itself.
• The responsibility for collecting and reporting data, verifying its accuracy, computing measures, and interpreting the results should be described.

Stage 6: Monitor the Measurements

• To manage the reliability and thereby achieve the goals for the delivered product:
  • As collection and reporting begin, monitor the measurements and the progress made during development.
  • The measurements assist in determining whether the intermediate reliability goals are achieved and whether the final goal is achievable.
  • Analyze the measures and determine whether the results are sufficient to satisfy the reliability goals.
  • Decide whether a measure's results assist in affirming the reliability of the product or process being measured.
  • Take corrective action.


Stage 7: Assess Reliability

• Analyze measurements to ensure that the reliability of the delivered software satisfies the reliability objectives and that the reliability, as measured, is acceptable.

Stage 8: Use Software Measurement

• Assess the effectiveness of the measurement effort and perform the necessary corrective action.
• Conduct a follow-up analysis of the measurement effort to evaluate the reliability assessment and development practices, record lessons learned, and evaluate user satisfaction with the software's reliability.


Stage 9: Retain Software Measurement Data

• Retain measurement data on the software throughout the development and operation phases for use in future projects.
• This data provides a baseline for reliability improvement and an opportunity to compare the same measures across completed projects.
• This information can assist in developing future guidelines and standards.

STEPS TO TAKE IN USING METRICS

• Assess the process: determine the level of process maturity.
• Determine the appropriate metrics to collect.
• Recommend metrics, tools, and techniques.
• Estimate project cost and schedule.
• Collect the appropriate level of metrics.
• Construct a project database of metrics data, which can be used for analysis and to track the value of metrics over time.
• Cost and schedule evaluation: when the project is complete, evaluate the initial estimates of cost and schedule for accuracy. Determine which factors might account for discrepancies between predicted and actual values.
• Form a basis for future estimates.

IEEE-Defined Metrics: A Few Examples

Fault Density

• This measure can be used to predict remaining faults by comparison with expected fault density, determine whether sufficient testing has been completed, and establish standard fault densities for comparison and prediction.

  Fd = F / KSLOC

• Where:
  F = total number of unique faults found in a given interval and resulting in failures of a specified severity level
  KSLOC = number of source lines of executable code and non-executable data declarations, in thousands
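As a minimal sketch (the function name and example counts are mine, not from the IEEE standard), the fault density computation is:

```python
def fault_density(unique_faults: int, sloc: int) -> float:
    """Fd = F / KSLOC, where KSLOC is source lines of executable code
    plus non-executable data declarations, in thousands."""
    if sloc <= 0:
        raise ValueError("sloc must be positive")
    return unique_faults / (sloc / 1000.0)

# Example: 12 unique faults found in a 48,000-line system.
print(fault_density(12, 48_000))  # 0.25 faults per KSLOC
```

Comparing the result against an expected fault density for systems of this kind is what turns the raw ratio into a stopping-rule for testing.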


Cumulative Failure Profile

• This is a graphical method used to predict reliability, estimate the additional testing time needed to reach an acceptably reliable system, and identify modules and subsystems that require additional testing.
• A plot is drawn of cumulative failures versus a suitable time base.

Fault-Days Number

• This measure represents the number of days that faults spend in the system, from their creation to their removal.
• For each fault detected and removed, during any phase, the number of days from its creation to its removal is determined (fault-days).
• The fault-days are then summed for all faults detected and removed, to get the fault-days number at the system level, including all faults detected and removed up to the delivery date.
• In those cases where the creation date of the fault is not known, the fault is assumed to have been created at the middle of the phase in which it was introduced.
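The summation above can be sketched as follows; the `PHASES` table, function names, and dates are hypothetical, and the mid-phase rule is applied when a fault's creation date is unknown:

```python
from datetime import date, timedelta

# Hypothetical phase boundaries, used only to apply the rule that a fault
# with an unknown creation date is assumed created mid-phase.
PHASES = {
    "design": (date(2024, 1, 1), date(2024, 2, 29)),
    "coding": (date(2024, 3, 1), date(2024, 4, 30)),
}

def fault_days(created, removed, phase):
    """Days a single fault spent in the system, from creation to removal."""
    if created is None:  # unknown creation date: assume middle of the phase
        start, end = PHASES[phase]
        created = start + timedelta(days=(end - start).days // 2)
    return (removed - created).days

def fault_days_number(faults):
    """System-level fault-days number: the sum over all detected and removed faults."""
    return sum(fault_days(c, r, p) for c, r, p in faults)

faults = [
    (date(2024, 3, 10), date(2024, 4, 1), "coding"),  # 22 fault-days
    (None, date(2024, 3, 15), "design"),              # creation date unknown
]
print(fault_days_number(faults))  # 67
```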

Functional or Modular Test Coverage

• This measure is used to quantify a software test coverage index for a software delivery. From the system's functional requirements, a cross-reference listing of associated modules must first be created.

  Functional (Modular) Test Coverage Index = FE / FT

• Where:
  FE = number of software functional (modular) requirements for which all test cases have been satisfactorily completed
  FT = total number of software functional (modular) requirements

Requirements Traceability

• This measure aids in identifying requirements that are either missing from, or in addition to, the original requirements.

  TM = R1 / R2 × 100%

• Where:
  R1 = number of requirements met by the architecture
  R2 = number of original requirements


Software Maturity Index

• This measure is used to quantify the readiness of a software product.
• Changes from previous baselines to the current baseline are an indication of the current product stability.

  SMI = (MT − (Fa + Fc + Fdel)) / MT

• Where:
  SMI = software maturity index
  MT = number of software functions (modules) in the current delivery
  Fa = number of software functions (modules) in the current delivery that are additions to the previous delivery
  Fc = number of software functions (modules) in the current delivery that include internal changes from a previous delivery
  Fdel = number of software functions (modules) in the previous delivery that are deleted in the current delivery

Number of Conflicting Requirements

• This measure is used to determine the reliability of a software system resulting from the software architecture under consideration, as represented by a specification based on the entity-relationship-attribute model.
• What is required is a list of the system's inputs, its outputs, and a list of the functions performed by each program.
• The mappings from the software architecture to the requirements are identified.
• Mappings from the same specification item to more than one differing requirement are examined for requirements inconsistency.
• Additionally, mappings from more than one specification item to a single requirement are examined for specification inconsistency.
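A minimal sketch of the SMI computation, with hypothetical module counts:

```python
def software_maturity_index(mt: int, fa: int, fc: int, fdel: int) -> float:
    """SMI = (MT - (Fa + Fc + Fdel)) / MT."""
    if mt <= 0:
        raise ValueError("mt must be positive")
    return (mt - (fa + fc + fdel)) / mt

# Example: 200 modules in the current delivery, of which 10 are additions,
# 15 contain internal changes, and 5 modules of the previous delivery
# were deleted.
print(software_maturity_index(200, 10, 15, 5))  # 0.85
```

An SMI approaching 1.0 indicates that little is changing between baselines, i.e. a stabilizing product.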


Cyclomatic Complexity

• This measure is used to determine the structural complexity of a coded module.
• The use of this measure is designed to limit the complexity of the module, thereby promoting its understandability.

  C = E − N + 1

• Where:
  C = complexity
  N = number of nodes (sequential groups of program statements)
  E = number of edges (program flows between nodes)

Test Coverage

• This is a measure of the completeness of the testing process, from both a developer's and a user's perspective.
• The measure relates directly to the development, integration, and operational test stages of product development.

  TC(%) = (Implemented capabilities / Required capabilities) × (Program primitives tested / Total program primitives) × 100%

• Where:
  • Program functional primitives are either modules, segments, statements, branches, or paths
  • Data functional primitives are classes of data
  • Requirement primitives are test cases or functional capabilities
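Using the formula exactly as defined above (C = E − N + 1), this sketch counts the nodes and edges of a control-flow graph given as an adjacency list; the graph and function name are hypothetical:

```python
def cyclomatic_complexity(cfg: dict) -> int:
    """C = E - N + 1, from a control-flow graph expressed as an
    adjacency list {node: [successor, ...]}."""
    n = len(cfg)                                   # N: nodes
    e = sum(len(succs) for succs in cfg.values())  # E: edges
    return e - n + 1

# Hypothetical CFG for a module with one if/else branch:
#   entry -> test; test -> then | else; then -> exit; else -> exit
cfg = {
    "entry": ["test"],
    "test": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))  # 5 edges - 5 nodes + 1 = 1
```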

Mean-Time-to-Failure

• This measure is the basic parameter required by most software reliability models.
• Detailed record keeping of failure occurrences that accurately tracks the time (calendar or execution) at which the faults manifest themselves is essential.
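One common way to estimate MTTF from such records is to average the interfailure times; this sketch (function name and data are mine, not from the source) assumes the input is the cumulative times at which failures were observed:

```python
def mean_time_to_failure(failure_times):
    """Estimate MTTF as the mean of the interfailure times, given
    cumulative failure times (calendar or execution) starting from 0."""
    times = sorted(failure_times)
    intervals = [b - a for a, b in zip([0] + times[:-1], times)]
    return sum(intervals) / len(intervals)

# Failures observed at 10, 30, and 60 execution hours:
# interfailure times are 10, 20, and 30 hours.
print(mean_time_to_failure([10, 30, 60]))  # 20.0
```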
Questions

