Metrics - Pressman

Process and Project Metrics

Until you can measure something and express it in numbers, you have only the beginning of understanding.

- Lord Kelvin

Non-software metrics you use every day:

- Gas tank gauge
- Speedometer
- Thermostat in your house
- Battery monitor in your laptop

What would happen if these were not numeric? Imagine a gas tank gauge that read only “A Lot / Some / A Little”.

What other examples do you have?

The problem is non-numeric measurements are subjective… they mean different things to different people. Numbers are objective… they mean the same thing to everyone!
Metrics for software
 When asked to measure something, always try to
determine an objective measurement. If not
possible, try to get as close as you can!
A Good Manager Measures

[Figure: a good manager measures both the process (yielding process metrics and project metrics) and the product (yielding product metrics).]

What do we use as a basis?
• size?
• function?

We need a basis to say 20 defects per X lines of code.
Why is this important?
 1) Because lines of code equate to cost
 2) We want our metrics to be valid across projects of many sizes
 3) Because this helps us understand how big our program is
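As a minimal sketch of why normalization matters (all counts below are illustrative, not from any real project):

# Minimal sketch: normalizing defect counts by size so they can be
# compared across projects of different scale. Numbers are made up.

def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density, expressed per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# A 50,000-LOC project with 120 defects vs. a 10,000-LOC one with 40:
print(defects_per_kloc(120, 50_000))  # 2.4 defects/KLOC
print(defects_per_kloc(40, 10_000))   # 4.0 defects/KLOC -- denser, despite fewer defects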
Why Do We Measure?
 assess the status of an ongoing project
 track potential risks
 uncover problem areas before they go “critical”
 adjust work flow or tasks
 evaluate the project team’s ability to control the quality of software work products
Process versus Project Metrics
 Process Metrics - Measure the process to help update and change the process as needed across many projects
 Project Metrics - Measure specific aspects of a single project to improve the decisions made on that project

Frequently the same measurements can be used for both purposes.
Process Measurement
 We measure the efficacy of a software process indirectly.
 That is, we derive a set of metrics based on the outcomes of
the process
 Outcomes include
 measures of errors uncovered before release of the
software
 defects delivered to and reported by end-users
 work products delivered (productivity)
 human effort expended
 calendar time expended
 schedule conformance
 many others…
 We also derive process metrics by measuring the
characteristics of specific software engineering tasks.
Process Metrics Guidelines
 Use common sense and organizational sensitivity when
interpreting metrics data.
 Provide regular feedback to the individuals and teams who collect
measures and metrics.
 Don’t use metrics to appraise individuals.
 Work with practitioners and teams to set clear goals and metrics
that will be used to achieve them.
 Never use metrics to threaten individuals or teams.
 Metrics data that indicate a problem area should not be
considered “negative.” These data are merely an indicator for
process improvement.
 Don’t obsess on a single metric to the exclusion of other
important metrics.
Suppose I calculate the number of defects per developer, rank the developers, and then assign salary raises based on that ranking.
 A. This is good
 B. This is bad
Software Process Improvement

[Figure: the SPI cycle — a process model and improvement goals feed software process improvement (SPI), which is guided by process metrics and produces process improvement recommendations.]

Make your metrics actionable!


Typical Process Metrics
 Quality-related
 focus on quality of work products and deliverables
 e.g., correctness, maintainability, integrity, usability; severity of errors (1-5); MTTF (mean time to failure); MTTR (mean time to repair)
 Productivity-related
 production of work products relative to effort expended
 e.g., Earned Value Analysis
 Statistical SQA data
 error categorization & analysis
 Defect removal efficiency
 propagation of errors from process activity to activity
 DRE for a stage = defects found in this stage / (defects found in this stage + defects found in the next stage)
 Reuse data
 the number of components produced and their degree of reusability
 Within a single project this can also be a “project metric”. Across projects it is a “process metric”.
Can you calculate a metric that records the number of ‘e’s that appear in a program?
 A. Yes
 B. No

 Should you calculate the number of ‘e’s in a program?
 A. Yes
 B. No
Effective Metrics
 Simple and computable
 Empirically and intuitively persuasive
 Consistent and objective
 Consistent in use of units and dimensions
 Programming language independent
 Should be actionable
Actionable Metrics
Actionable metrics (or information in general) are metrics that guide change or decisions about something.

 Actionable: Measure the amount of human effort versus use cases completed.
 Too high -- more training, more design, etc…
 Very low -- maybe we can shorten the schedule

 Not actionable: Measure the number of times the letter “e” appears in code.

Think before you measure. Don’t waste people’s time!
Project Metrics
 used to minimize the development schedule by making the
adjustments necessary to avoid delays and mitigate
potential problems and risks
 used to assess product quality on an ongoing basis and,
when necessary, modify the technical approach to improve
quality.
 every project should measure:
 Inputs —measures of the resources (e.g., people, tools)
required to do the work.
 Outputs —measures of the deliverables or work products
created during the software engineering process.
 Results —measures that indicate the effectiveness of the
deliverables.
Typical Project Metrics
 Effort/time per software engineering
task
 Errors uncovered per review hour
 Scheduled vs. actual milestone dates
 Changes (number) and their
characteristics
 Distribution of effort on software
engineering tasks
Metrics Guidelines

 Same as the process metrics guidelines listed earlier.


These courseware materials are to be used in conjunction with Software Engineering: A Practitioner’s Approach, 6/e
and are provided with permission by R.S. Pressman & Associates, Inc., copyright © 1996, 2001, 2005
Typical Size-Oriented Metrics
 errors per KLOC (thousand lines of code)
 defects per KLOC
 $ per LOC
 pages of documentation per KLOC
 errors per person-month
 errors per review hour
 LOC per person-month
 $ per page of documentation
Typical Function-Oriented Metrics
 errors per Function Point (FP)
 defects per FP
 $ per FP
 pages of documentation per FP
 FP per person-month
But… what is a Function Point?
 Function points (FP) are a unit measure for software size developed at IBM in 1979 by Allan Albrecht
 To determine your number of FPs, you classify a system into five classes:
 Transactions - External Inputs, External Outputs, External Inquiries
 Data storage - Internal Logical Files and External Interface Files
 Each class count is then weighted by complexity as low/average/high
 The total is multiplied by a value adjustment factor
But… what is a Function Point?

                            Count   Low   Average   High   Total
 External Inputs             ___    x3      x4       x6     ___
 External Outputs            ___    x4      x5       x7     ___
 External Inquiries          ___    x3      x4       x6     ___
 Internal Logical Files      ___    x7      x10      x15    ___
 External Interface Files    ___    x5      x7       x10    ___
                                          Unadjusted Total:  ___
                                    Value Adjustment Factor: ___
                                       Total Adjusted Value: ___
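Putting the table together as code — a minimal sketch using the average weights above and the common VAF form 0.65 + 0.01 × ΣF_i over the 14 general system characteristics; the sample counts and the uniform ratings of 3 are illustrative assumptions, not from any real project:

# Sketch of the function point calculation, using the average weights
# from the table above. Sample counts and ratings are illustrative.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_fp(counts: dict) -> int:
    """Sum each class count times its complexity weight."""
    return sum(counts[k] * w for k, w in AVERAGE_WEIGHTS.items())

def adjusted_fp(ufp: int, gsc_ratings: list) -> float:
    """Apply the value adjustment factor VAF = 0.65 + 0.01 * sum(F_i),
    where each F_i rates a general system characteristic from 0 to 5."""
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

counts = {"external_inputs": 12, "external_outputs": 8,
          "external_inquiries": 6, "internal_logical_files": 4,
          "external_interface_files": 2}
ufp = unadjusted_fp(counts)        # 12*4 + 8*5 + 6*4 + 4*10 + 2*7 = 166
print(adjusted_fp(ufp, [3] * 14))  # VAF = 0.65 + 0.42 = 1.07 -> 177.62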
Function Point Example

http://www.his.sunderland.ac.uk/~cs0mel/Alb_Example.doc
Comparing LOC and FP

LOC per Function Point
 Programming Language   avg.   median   low   high
 Ada                     154      -     104    205
 Assembler               337     315     91    694
 C                       162     109     33    704
 C++                      66      53     29    178
 COBOL                    77      77     14    400
 Java                     63      53     77      -
 JavaScript               58      63     42     75
 Perl                     60      -       -      -
 PL/1                     78      67     22    263
 PowerBuilder             32      31     11    105
 SAS                      40      41     33     49
 Smalltalk                26      19     10     55
 SQL                      40      37      7    110
 Visual Basic             47      42     16    158

Representative values developed by QSM
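These averages are often used to convert an FP count into a rough size estimate per language. A hedged sketch (the 100-FP system is invented; the ratios are the QSM averages from the table):

# Sketch: converting an FP count into a rough LOC estimate using the
# representative QSM averages above. The 100-FP system is made up.

LOC_PER_FP = {"C": 162, "C++": 66, "Java": 63, "Smalltalk": 26, "Visual Basic": 47}

def estimated_loc(function_points: float, language: str) -> float:
    """Rough size estimate: FP count times the language's average LOC/FP."""
    return function_points * LOC_PER_FP[language]

print(estimated_loc(100, "C"))     # ~16,200 LOC
print(estimated_loc(100, "Java"))  # ~6,300 LOC for the same functionality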


At IBM in the 70s or 80s (I don’t remember which) they paid people per line of code they wrote
 What happened?
 A. The best programmers got paid the most
 B. The worst programmers got paid the most
 C. The sneakiest programmers got paid the most
 D. The lawyers got paid the most
Why Opt for FP?
 Programming language independent
 Uses readily countable characteristics that are determined early in the software process
 Does not “penalize” inventive (short) implementations that use fewer LOC than other, clumsier versions
 Makes it easier to measure the impact of reusable components
Object-Oriented Metrics
 Number of scenario scripts (use-cases)
 Number of support classes (required to
implement the system but are not immediately
related to the problem domain)
 Average number of support classes per key class
(analysis class)
 Number of subsystems (an aggregation of
classes that support a function that is visible to
the end-user of a system)
Web Engineering Project Metrics
 Number of static Web pages (the end-user has no control over the
content displayed on the page)
 Number of dynamic Web pages (end-user actions result in
customized content displayed on the page)
 Number of internal page links (internal page links are pointers
that provide a hyperlink to some other Web page within the
WebApp)
 Number of persistent data objects
 Number of external systems interfaced
 Number of static content objects
 Number of dynamic content objects
 Number of executable functions
Measuring Quality
 Correctness — the degree to which a program operates according to specification
 measured as verified non-conformance with requirements, per KLOC
 Maintainability — the degree to which a program is amenable to change
 measured via MTTC (mean time to change): the time to analyze, design, implement, and deploy a change
 Integrity — the degree to which a program is impervious to outside attack
 threat = probability of an attack; security = likelihood of repelling an attack
 Integrity = Σ [1 - threat × (1 - security)]
 e.g., t = 0.25, s = 0.95 --> I = 0.99
 Usability — the degree to which a program is easy to use
 many options; see ch. 12
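A minimal sketch of the integrity formula as stated on this slide, reproducing its worked example for a single threat class:

# Sketch of the slide's integrity term for one threat class:
#   integrity = 1 - threat * (1 - security)
# The slide sums this term over all threat classes.

def integrity(threat: float, security: float) -> float:
    """Integrity contribution of one threat class."""
    return 1 - threat * (1 - security)

print(integrity(threat=0.25, security=0.95))  # 0.9875, shown as ~0.99 on the slide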
Defect Removal Efficiency

DRE = E / (E + D)

 E is the number of errors found before delivery of the software to the end-user
 D is the number of defects found after delivery
Defect Removal Efficiency

DRE = E / (E + D)

Defects found during each phase (what are the rest?):
 Requirements (10): 10 / (10 + 20) = 33%
 Design (20): 20 / (20 + 50) = 28%
 Construction
 Implementation (5): 5 / (5 + 50) = 9%
 Unit Testing (50): 50 / (50 + 100) = 33%
 Testing
 Integration Testing (100): 100 / (100 + 250) = 28%
 System Testing (250): 250 / (250 + 5) = 98%
 Acceptance Testing (5): 5 / (5 + 10) = 33%
 By Customer (10)
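For the project as a whole, the headline DRE treats everything caught before delivery as E and the customer-reported defects as D. A small sketch using the slide's counts:

# Overall DRE = E / (E + D), with E = all defects found before delivery
# and D = defects found after delivery. Counts are from the slide above.

found_before_delivery = [10, 20, 5, 50, 100, 250, 5]  # requirements .. acceptance
found_by_customer = 10

E = sum(found_before_delivery)      # 440
D = found_by_customer               # 10
print(f"DRE = {E / (E + D):.1%}")   # DRE = 97.8%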
Metrics for Small Organizations
 time (hours or days) elapsed from the time a request is made until evaluation is complete, t_queue
 effort (person-hours) to perform the evaluation, W_eval
 time (hours or days) elapsed from completion of evaluation to assignment of a change order to personnel, t_eval
 effort (person-hours) required to make the change, W_change
 time required (hours or days) to make the change, t_change
 errors uncovered during work to make the change, E_change
 defects uncovered after the change is released to the customer base, D_change
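A hedged sketch of two simple indicators one might derive from these measures; the sample values are invented, and applying DRE to E_change and D_change follows the DRE definition above:

# Sketch: simple indicators derived from the small-organization measures.
# All sample values below are illustrative.

t_queue, t_eval, t_change = 2.0, 1.5, 4.0      # days
w_eval, w_change = 6.0, 20.0                   # person-hours
e_change, d_change = 8, 2                      # error/defect counts

elapsed_days = t_queue + t_eval + t_change     # end-to-end time per change request
dre_for_changes = e_change / (e_change + d_change)

print(f"elapsed days per change request: {elapsed_days}")   # 7.5
print(f"DRE for changes: {dre_for_changes:.0%}")            # 80%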
Establishing a Metrics
Program
 Set Goals
 Identify your business goals.
 Identify what you want to know or learn.
 Identify your subgoals.
 Identify the entities and attributes related to your subgoals.
 Formalize your measurement goals.
 Determine indicators for goals
 Identify quantifiable questions and the related indicators that you will
use to help you achieve your measurement goals.
 Identify the data elements that you will collect to construct the
indicators that help answer your questions.
 Define Measurements
 Define the measures to be used, and make these definitions
operational.
 Identify the actions that you will take to implement the measures.
 Prepare a plan for implementing the measures.
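As an illustration only (every name and value below is invented), the goal → question → indicator → data-element decomposition above might be recorded like this:

# Illustrative sketch of the goal -> question -> indicator -> data-element
# decomposition described above. All names and values are invented.

metrics_program = {
    "business_goal": "Reduce post-release defects",
    "measurement_goal": "Understand where defects are introduced and caught",
    "questions": [
        {
            "question": "How effective is each review/test phase?",
            "indicator": "DRE per phase",
            "data_elements": ["defects found per phase", "phase of origin"],
        },
        {
            "question": "Is defect density improving release over release?",
            "indicator": "defects per KLOC, by release",
            "data_elements": ["defect counts", "size in KLOC"],
        },
    ],
}

for q in metrics_program["questions"]:
    print(f'{q["question"]} -> collect: {", ".join(q["data_elements"])}')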
Metrics give you information!
 Metrics about your process help you determine whether you need to make changes or whether your process is working
 Metrics about your project do the same thing
 Metrics about your software can help you understand it better and see where possible problems may lurk. Let’s look at the complexity measurement (after a few questions…)
Questions
 What are some reasons NOT to use lines of code
to measure size?
 What do you expect the DRE rate will be for the
implementation (or construction) phase of the
software lifecycle?
 What about for testing?
 Give an example of a usability metric.
 According to the chart, Smalltalk is much more
efficient than Java and C++. Why don’t we use it
for everything?
