

INTRODUCTION
TO
SOFTWARE
ENGINEERING

Mr. M. Benjmin Jashva


(Assistant Professor, CSE Department)
St. Peter’s Engineering College, Hyderabad, (TS), INDIA
Ms. P. Tara Kumari
(Assistant Professor, CSE Department)
St. Peter’s Engineering College, Hyderabad, (TS), INDIA
Dr. H. Shaheen
(Associate Professor, CSE Department)
St. Peter’s Engineering College, Hyderabad, (TS), INDIA
Mr. Chaitanya Kishor Reddi M.
(Assistant Professor, CSE Department)
St. Peter’s Engineering College, Hyderabad, (TS), INDIA
INTRODUCTION TO SOFTWARE ENGINEERING

Copyright © : Dr. H. Shaheen


Publishing Rights : VSRD Academic Publishing
A Division of Visual Soft India Pvt. Ltd.

ISBN-13: 978-93-87610-28-6
FIRST EDITION, NOVEMBER 2018, INDIA

Printed & Published by:


VSRD Academic Publishing
(A Division of Visual Soft India Pvt. Ltd.)
Disclaimer: The author(s) are solely responsible for the contents compiled in this book.
The publishers or its staff do not take any responsibility for the same in any manner.
Errors, if any, are purely unintentional and readers are requested to communicate such
errors to the Authors or Publishers to avoid discrepancies in future.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system or transmitted, in any form or by any means, electronic, mechanical, photo-
copying, recording or otherwise, without the prior permission of the Publishers & Author.

Printed & Bound in India

VSRD ACADEMIC PUBLISHING


A Division of Visual Soft India Pvt. Ltd.

REGISTERED OFFICE
154, Tezabmill Campus, Anwarganj, KANPUR – 208003 (UP) (IN)
Mb: 9899936803, Web: www.vsrdpublishing.com, Email: vsrdpublishing@gmail.com

MARKETING OFFICE
340, FF, Adarsh Nagar, Oshiwara, Andheri(W), MUMBAI–400053 (MH)(IN)
Mb: 9956127040, Web: www.vsrdpublishing.com, Email: vsrdpublishing@gmail.com
PREFACE

This book entitled “Software Engineering” has been


written in accordance with the syllabus prescribed by the
‘JNTUH R2016’ for the Third Year, B.Tech students of
Engineering colleges affiliated to JNTUH.

This book comprises five chapters which cover the
Jawaharlal Nehru Technological University, Hyderabad
syllabus. The main emphasis of the book is to explain, in
a simple manner, the logical concepts so that even
beginners can understand them without difficulty.

Systematic care has been taken to support the topics


with necessary illustrations and relevant diagrams to
make learning much easier. It is believed that this book
shall serve all the requirements of Final Year
Engineering students.

It covers all the important questions that have appeared
in previous years’ Jawaharlal Nehru Technological
University, Hyderabad examinations. University
questions for regulation R2016 are given at the end.

Your suggestions are most welcome.

Author(s)
ACKNOWLEDGEMENT

We sincerely thank the Almighty for being with us


through all stages of the preparation of this book.

Firstly, we would like to express our sincere gratitude to
our Chairman, T. Bala Reddy, St. Peter’s Engineering
College, for his continuous support, motivation, and
immense knowledge.

Special acknowledgements are due to our Secretary, Mr.
T.V. Reddy, St. Peter’s Engineering College, for his
continuous support towards the successful completion of
this book.

We thank Dr. M. Narendra Kumar, Principal,
St. Peter’s Engineering College, for being a source of
inspiration.

We thank Dr. Sharada Varalakshmi, Head of the
Department, who has always supported and motivated
us to accomplish this work.

We thank our Friends and Colleagues for their


encouragement and support in various stages of writing
this book.

We express our sincere thanks to our publisher VSRD


Academic Publishing (A Division of Visual Soft
India Private Limited), for their help and co-operation
in publishing this book.

Author(s)
CONTENTS

CHAPTER 1 : INTRODUCTION ........................................... 1


1.1. THE EVOLVING ROLE OF SOFTWARE........................................... 1
1.2. A GENERIC VIEW OF PROCESS SOFTWARE ENGINEERING –
A LAYERED TECHNOLOGY ........................................................... 6
1.3. PROCESS MODELS .................................................................... 17

CHAPTER 2 : SOFTWARE REQUIREMENTS ................. 27


2.1. TYPES OF REQUIREMENT.......................................................... 27
2.2. FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS ............ 28
2.3. FUNCTIONAL REQUIREMENTS .................................................. 28
2.4. NON-FUNCTIONAL REQUIREMENTS ......................................... 29
2.5. NON-FUNCTIONAL CLASSIFICATIONS ....................................... 29
2.6. DOMAIN REQUIREMENTS ........................................................ 32
2.7. USER REQUIREMENTS .............................................................. 34
2.8. SYSTEM REQUIREMENTS .......................................................... 35
2.9. INTERFACE SPECIFICATION ....................................................... 40
2.10. THE SOFTWARE REQUIREMENTS DOCUMENT .......................... 41
2.11. REQUIREMENTS ENGINEERING PROCESS.................................. 45
2.12. REQUIREMENTS ELICITATION AND ANALYSIS ........................... 46
2.13. REQUIREMENTS DISCOVERY..................................................... 48
2.14. ETHNOGRAPHY ........................................................................ 53
2.15. SYSTEM MODELS ...................................................................... 58
2.16. CONTEXT MODELS.................................................................... 59
2.17. BEHAVIORAL MODELS .............................................................. 61
2.18. STRUCTURED METHODS........................................................... 74

CHAPTER 3 : DESIGN ENGINEERING............................. 77


3.1. DESIGN PROCESS AND DESIGN QUALITY .................................. 77
3.2. DESIGN MODELS ...................................................................... 82
3.3. SEVERE SYSTEM FAILURE -- 14A ............................................... 97

CHAPTER 4 : TESTING STRATEGIES ........................... 119


4.1. A STRATEGIC APPROACH TO SOFTWARE TESTING.................. 119
4.2. TEST STRATEGIES FOR CONVENTIONAL SOFTWARE ............... 123
4.3. WHITE-BOX TESTING .............................................................. 130
4.4. GRAPH MATRICES .................................................................. 135
4.5. BLACK-BOX TESTING .............................................................. 139
4.6. THE ART OF DEBUGGING........................................................ 146
4.7. PRODUCT METRICS ................................................................ 148
4.8. ISO 9126 QUALITY FACTORS................................................... 150
4.9. A FRAMEWORK FOR PRODUCT METRICS ............................... 151
4.10. METRICS FOR ANALYSIS MODEL............................................. 154
4.11. METRICS FOR DESIGN MODEL ................................................ 157
4.12. CLASS-ORIENTED METRICS: THE MOOD METRICS SUITE......... 161
4.13. METRICS FOR PROCESS AND PRODUCTS ................................ 164
4.14. SOFTWARE MEASUREMENT ................................................... 165

CHAPTER 5 : RISK MANAGEMENT .............................. 169


5.1. REACTIVE VS PROACTIVE RISK STRATEGIES ............................ 169
5.2. SOFTWARE RISKS ................................................................... 170
5.3. RISK IDENTIFICATION ............................................................. 170
5.4. ASSESSING PROJECT RISK ....................................................... 171
5.5. RISK COMPONENTS AND DRIVERS ......................................... 172
5.6. RISK PROJECTION ................................................................... 172
5.7. RISK MITIGATION MONITORING AND MANAGEMENT
(RMMM) ................................................................................ 176
5.8. RMMM PLAN ......................................................................... 178
5.9. QUALITY MANAGEMENT........................................................ 179
5.10. SOFTWARE QUALITY ASSURANCE .......................................... 181
5.11. SOFTWARE REVIEWS .............................................................. 182
5.12. SOFTWARE RELIABILITY.......................................................... 187
5.13. ISO 9000 QUALITY STANDARDS .............................................. 188

CHAPTER 6 : QUESTION BANK, TUTORIAL


QUESTIONS AND SYLLABUS ......................................... 190
6.1. QUESTION BANK .................................................................... 190
6.2. TUTORIAL QUESTIONS............................................................ 196
6.3. SYLLABUS ............................................................................... 200
CHAPTER 1
Introduction
KEY CONCEPTS
1.1. THE EVOLVING ROLE OF SOFTWARE................................ 1
1.2. A GENERIC VIEW OF PROCESS SOFTWARE ENGINEERING –
A LAYERED TECHNOLOGY ............................................ 6
1.3. PROCESS MODELS ............................................................ 17

1.1. THE EVOLVING ROLE OF SOFTWARE

Definition of software
Software is a set of items or objects that form a “configuration” that
includes
 Programs
 Documents
 Data
Software’s Dual Role
 Software is a product
o Delivers computing potential
o Produces, manages, acquires, modifies, displays, or
transmits information
 Software is a vehicle for delivering a product
o Supports or directly provides system functionality
o Controls other programs (e.g., an operating system)
o Effects communications (e.g., networking software)
o Helps build other software (e.g., software tools)
Software Characteristics
Software is developed or engineered; it is not manufactured in the
classical sense.
Although some similarities exist between software development and
hardware manufacturing, the two activities are fundamentally different.
In both activities, high quality is achieved through good design, but the
manufacturing phase can introduce quality problems that are non-existent
in software. Both activities depend on people, but the relationship
between the people applied and the work accomplished is different.
Software does not wear out. However, it deteriorates due to change.

Fig.: Failure curve for h/w


It indicates that hardware exhibits high failure rates early in its life.
Defects are corrected, and the failure rate drops to a steady-state level
for some period of time. As time passes, the failure rate rises again
as hardware components suffer from the cumulative effects of dust,
temperature and other environmental maladies. Simply put, hardware
begins to wear out.

Fig.: Failure curve for s/w



Software doesn’t wear out, so its failure curve should take the form of the
idealized curve. Undiscovered defects cause high failure rates early in the
life of a program. However, these are corrected and the curve flattens. The
actual curve shows that during its life, software will undergo change. As
changes are made, it is likely that errors will be introduced, causing the
failure rate curve to spike again. Before the curve can return to its original
state, another change is requested, causing it to spike again. Slowly, the
minimum failure rate level begins to rise; the software is deteriorating
due to change.
Although the industry is moving towards component-based
construction, most software continues to be custom built.
Custom-built: built according to the needs of a particular customer.
The main goal of component-based construction is reuse. In the hardware
world, component reuse is a natural part of the engineering process. In the
software world, it has only begun to be achieved on a broad scale.
Evolution of software:
 1950’s-60’s (mid): Batch orientation, limited distribution
 1960’s (mid)-1970’s (mid): multiuser, real time, database,
product s/w
 1970’s mid-1980’s (late): distributed systems, low cost h/w,
and consumer impact s/w
 1980’s (late)-2000: object-oriented techniques, artificial neural
networks.
Changing Nature of Software
 Seven broad categories of software pose challenges for software
engineers:
o System software: System software is a collection of
programs written to service other programs.
 Software is determinate if the order and timing
of its inputs, processing, and outputs are
predictable (e.g. compilers, file management
utilities).
 It is indeterminate if the order and timing of its
inputs, processing, and outputs are not
predictable in advance (e.g. networking
software, operating system components).
o Application software: This is designed to help users
perform a specific task. Applications in this area
process business or technical data. Ex: C, JAVA.
o Engineering and scientific software: Engineering and
scientific software have been characterized by “number
crunching” algorithms (which perform complex, lengthy
numeric calculations). However, modern applications are
moving away from numeric algorithms towards
computer-aided design and system simulation.
o Embedded software: It resides within a product or
system and is used to implement and control features
and functions of the end-user and of the system itself.
 Ex: keypad control of a microwave oven, fuel
control, etc.
o Product-line software: Designed to provide a specific
capability for use by many different customers.
 Ex: computer graphics, entertainment, database
management.
o Web-applications: These are applications which are
accessed over a network.
o Artificial intelligence software: Artificial intelligence (AI)
software makes use of non-numeric algorithms to solve
complex problems that are not amenable to computation
or straightforward analysis.
 Ex: Robotics, Expert systems, game playing etc.
Software Myths
 Propagate misinformation and confusion
 Three types of myths
o Management myth
o Customer myth
o Practitioner’s myth

Management Myths:
Myth (1)
We already have a book of standards and procedures for building
software. Won’t that provide my people with everything they need to
know?
Reality: The book of standards may very well exist but is it used?
Are software practitioners aware of its existence? Is it complete? Is it
adaptable?
Myth (2)
If we get behind schedule we can add more programmers and can
catch up.
Reality: As new people are added, the people who were already working
must spend time educating the newcomers, thereby reducing the amount
of time spent on productive development effort.
Myth (3)
If we outsource the software project to a third party, we can relax and let
that party build it.
Reality: If an organization doesn’t understand how to manage and
control software projects internally, it will invariably struggle when it
outsources projects.
Customer Myths:
Myth (1)
A general statement of objectives is enough to begin writing programs;
the details can be filled in later.
Reality: Unambiguous requirements can be developed only through
efficient and continuous communication between the developer and the
customer.
Myth (2)
Software requirements continually change, but change can be easily
accommodated because software is flexible.
Reality: When requirement changes are requested early, the cost impact
is relatively small. However, as time passes, the cost impact grows rapidly.
Practitioner Myths:
Myth (1)
Once the program is written, the job has been done.


Reality: Industry data indicate that between 60 and 80 percent of all
effort expended on software will be expended after it is delivered to the
customer for the first time.
Myth (2)
Until the program is running, there is no way of assessing its quality.
Reality: Formal technical reviews have been found to be more effective
than actual testing.
Myth (3)
The only deliverable work product is the working program.
Reality: A working program is only one part of the software configuration.
Documentation provides a foundation for successful engineering.
Myth (4)
Software Engineering creates voluminous and unnecessary
documentation and invariably slows down software development.
Reality: Software engineering is not about creating documents; it is about
creating quality. Better quality leads to reduced rework.

1.2. A GENERIC VIEW OF PROCESS SOFTWARE


ENGINEERING – A LAYERED TECHNOLOGY

Software Engineering Definitions


Software engineering: (1) The application of a systematic, disciplined,
quantifiable approach to the development, operation, and
maintenance of software; that is, the application of engineering to
software. (2) The study of approaches as in (1).

Quality focus: Bedrock that supports software Engineering.


Process: The foundation for software engineering. It defines a
framework that must be established for the effective delivery of software
engineering technology. The process provides a basis for the management
control of software projects.
Methods: Provide technical how-to’s for building Software. Methods
encompass a broad array of tasks that include
 Communication
o Requirements analysis
o Design modeling
o Program construction
o Testing
o Support
Tools: Provide semi-automated and automated support for the methods.
When tools are well integrated, the result is a CASE (Computer-Aided
Software Engineering) system.
A Process framework
A process framework establishes the foundation for a complete
software process by identifying a small number of framework
activities that are applicable to all software projects, regardless of
size or complexity, together with a set of umbrella activities that apply
across the entire software process.
Each framework activity is populated by a set of software engineering
actions: collections of related tasks that produce a major software
engineering work product.
These are the five framework activities:
 Communication: This activity involves heavy communication and
collaboration with customer and encompasses requirements
gathering.
 Planning: It establishes a software project plan for software
engineering work that follows. It describes technical tasks to be
conducted, risks that are likely, resources that will be required,
work product to be produced and a work schedule.
 Modeling: This activity encompasses creating models that allow the
customer to better understand the software requirements and design.
 Construction: This activity combines code generation and testing.
 Deployment: The software is delivered to the customer, who
evaluates the delivered product and provides feedback.
Each software engineering action is represented by a number of task
sets. The task set that best accommodates the needs of the project and the
characteristics of the team is chosen.

Umbrella activities
 Software project tracking and control: asses progress against
project plan and take necessary action to maintain schedule.
 Formal technical reviews: Assesses software engineering work
products in an effort to uncover or remove errors before they
are propagated to next action.
 Software quality assurance: defines and conducts activities
required to ensure quality
 Software configuration management; manages change
throughout the process.
 Work product preparation and production: encompasses
activities required to create work products such as documents,
forms, logs reports etc.
 Reusability management: Defines criteria for work product reuse
and establishes mechanisms to achieve reusable components.
 Measurement: Defines and collects process, project and product
measures that assist team in delivering software that meets
customer needs
 Risk management: assesses risks that may affect outcome of
project and quality
Capability Maturity Model Integration (CMMI)
Developed by the SEI (Software Engineering Institute)
 Assess the process model followed by an organization and rate
the organization with different levels
 A set of software engineering capabilities should be present as
organizations reach different levels of process capability and
maturity.
The CMMI process meta-model can be represented in two different ways:
 A continuous model
 A staged model
Continuous model:
 Levels are called capability levels.
 Describes a process in two dimensions.

Each process area is assessed against specific goals and practices and
is rated according to the following capability levels.
Six levels of CMMI
 Level 0:Incomplete
 Level 1:Performed
 Level 2:Managed
 Level 3:Defined
 Level 4:Quantitatively managed
 Level 5:Optimized
Incomplete
 The process is ad hoc. The objectives and goals of the process area
are not known.
Performed
All the specific goals and practices of the process area have been satisfied,
but performance may not be stable and may not meet specific
objectives such as quality, cost and schedule.
Managed
Activities are monitored, reviewed, evaluated and controlled to
achieve a given purpose and cost, schedule, quality are maintained.
These companies have some planned processes within teams and
teams are made to represent them for projects handled by them.
However processes are not standardized across organization.
Defined
All Level 2 criteria have been satisfied. In addition, the process is well
defined and is followed throughout the organization.
Quantitatively Managed
Metrics and indicators are available to measure the process and
quality
Optimized (Perfect & Complete)
 Continuous process improvement based on quantitative feedback
from the user
 Use of innovative ideas and techniques, statistical quality control
and other methods for process improvement.
In addition to the specific goals and practices for each process area, five
generic goals correspond to the capability levels. In order to achieve a
particular capability level, the generic goal for that level and the generic
practices corresponding to that goal must be achieved.
Staged model
 This model is used if you have no clue how to improve the
process for quality software.
 It suggests what other organizations have found helpful to
work on first.
 Levels are called maturity levels

Process Patterns
 The software process can be defined as a collection of patterns
that define a set of activities, actions, work tasks, work products
and/or related behaviors.
 A process pattern provides us with a template: a consistent
method for describing an important characteristic of the software process.
 Patterns can be defined at any level of abstraction. In some cases a
pattern is used to define a complete process (e.g. prototyping). In other
situations patterns can be used to describe an important
framework activity (e.g. planning) or a task within a framework
activity (e.g. project estimating).
Pattern Template
 Pattern Name: The pattern is given a meaningful name that
describes its function within the software process (e.g.
requirements unclear).
 Intent: The objective of the pattern is described briefly. For example,
the objective of the pattern might be to build a model that can be
assessed iteratively by stakeholders.
 Type: The pattern type is specified. Three types are suggested:
 Task pattern: defines a software engineering action or work task
that is part of a process (e.g. requirements gathering).
 Stage pattern: defines a framework activity for the process (e.g.
communication).
 Phase pattern: defines the sequence of framework activities that
occur within the process (e.g. the spiral model or prototyping).
 Initial Context: The conditions under which the pattern applies are
described. Prior to the initiation of the pattern: 1) stakeholders have
been identified; 2) a mode of communication between the software
team and the stakeholders has been established; 3) the problem to be
solved has been identified by the stakeholders; 4) an initial
understanding of the requirements has been developed.
Problem: The problem to be solved by the pattern is described.
Ex: Requirements are hazy or non-existent, i.e. stakeholders are
unsure of what they want and cannot describe the software
requirements in detail.
Solution: The implementation of the pattern is described. Ex: a
description of the prototyping process would be presented here.
Resulting Context: The conditions that will result once the pattern has
been successfully implemented are described.
Ex: 1) a software prototype that identifies basic requirements is
approved by the stakeholders; 2) the prototype may evolve through a
series of increments to become the production software.
Related Patterns: A list of process patterns that are related to this
one is provided.
Ex: customer communication, iterative design and development,
customer assessment, requirements extraction.
Known uses and examples: The specific instances in which the pattern
is applicable are indicated.
Process Assessment
 It attempts to keep a check on the current state of the software
process with the intention of improving it.

Fig.: Process assessment: the software process is examined by a
software process assessment, which leads to capability determination
and motivates software process improvement.

 Standard CMMI assessment for process improvement


(SCAMPI): provides a 5 step process assessment model that
incorporates initiating, diagnosing, establishing, acting and
learning. The SCAMPI uses SEI CMMI as basis for assessment.
 CMM based appraisal for internal process improvement:
provides a diagnostic technique for assessing relative maturity of
a software organization
 SPICE (ISO/IEC 15504): this standard defines a set of requirements
for software process assessment.
 ISO 9001:2000 for software: is a generic standard that applies
to any organization that wants to improve overall quality of
products, systems or services that it provides.
Personal and Team Process Models
The best software process is one that is close to the people who will
be doing the work. Each software engineer would create a process
that best fits his or her needs, and at the same time meets the
broader needs of the team and the organization. Alternatively, the
team itself would create its own process, and at the same time meet
the narrower needs of individuals and the broader needs of the
organization.
Personal software process (PSP)
The personal software process (PSP) emphasizes personal
measurement of both the work product that is produced and the
resultant quality of the work product. The PSP process model defines
five framework activities: planning, high-level design, high level
design review, development, and postmortem.
Planning: This activity isolates requirements and, based on these,
develops both size and resource estimates. In addition, a defect
estimate is made. All metrics are recorded on worksheets or
templates. Finally, development tasks are identified and a project
schedule is created.
High level design: External specifications for each component to be
constructed are developed and a component design is created.
Prototypes are built when uncertainty exists. All issues are recorded
and tracked.
High level design review: Formal verification methods are applied to
uncover errors in the design. Metrics are maintained for all
important tasks and work results.
Development: The component level design is refined and reviewed.
Code is generated, reviewed, compiled, and tested. Metrics are
maintained for all important task and work results.
Postmortem: Using the measures and metrics collected, the
effectiveness of the process is determined. Measures and metrics
should provide guidance for modifying the process to improve its
effectiveness. PSP stresses the need for each software engineer to
identify errors early and, as important, to understand the types of
errors that he or she is likely to make.
PSP represents a disciplined, metrics-based approach to software
engineering.
Team software process (TSP): The goal of TSP is to build a “self-
directed” project team that organizes itself to produce high-quality
software. The following are the objectives of TSP:
 Build self-directed teams that plan and track their work, establish
goals, and own their processes and plans. These can be pure
software teams or integrated product teams (IPT) of 3 to about
20 engineers.
 Show managers how to coach and motivate their teams and how
to help them sustain peak performance.
 Accelerate software process improvement by making CMM level
5 behaviors normal and expected.
 Provide improvement guidance to high-maturity organizations.
 Facilitate university teaching of industrial-grade team skills.
A self-directed team:
 Defines roles and responsibilities for each team member.
 Tracks quantitative project data.
 Identifies a team process that is appropriate for the project and a
strategy for implementing that process.
 Defines local standards that are applicable to the team’s software
engineering work.
 Continually assesses risk and reacts to it.
 Tracks, manages, and reports project status.
TSP defines the following framework activities: launch, high-level
design, implementation, integration and test, and postmortem.
TSP makes use of a wide variety of scripts, forms, and standards that
serve to guide team members in their work.
Scripts define specific process activities and other more detailed work
functions that are part of the team process.
Each project is “launched” using a sequence of tasks.
The following launch script is recommended
 Review project objectives with management and agree on and
document team goals.
 Establish team roles.
 Define the team’s development process.
 Make a quality plan and set quality targets.
 Plan for the needed support facilities.

1.3. PROCESS MODELS

 Process Models help in the software development


 They Guide the software team through a set of framework
activities
 Process Models may be linear, incremental or evolutionary
THE WATERFALL MODEL
 Used when requirements are well understood in the beginning
 Also called classic life cycle model.
 A systematic, sequential approach to Software development
 Begins with customer specification of Requirements and
progresses through planning, modeling, construction and
deployment
Disadvantages:
 Real projects rarely follow the sequential flow since they are
always iterative
 The model requires requirements to be explicitly spelled out in
the beginning, which is often difficult
 A working model is not available until late in the project time
span
THE INCREMENTAL PROCESS MODEL
 Linear sequential model is not suited for projects which are
iterative in nature
 Incremental model suits such projects
 Used when initial requirements are reasonably well-defined and
compelling need to provide limited functionality to users quickly
and then refine and expand on that functionality in later releases
It combines elements of the waterfall model applied in an iterative fashion.
The incremental model applies linear sequences in a staggered
fashion as calendar time progresses. Each linear sequence produces a
deliverable increment of the software. For example, word-processing
software developed using the incremental paradigm might deliver basic
file management, editing and document production functions in the 1st
increment; more sophisticated editing and document production
capabilities in the 2nd increment; spelling and grammar checking in the
3rd increment; and so on.

 1st increment constitutes Core product


 Basic requirements are addressed
 Core product undergoes detailed evaluation by the customer
As a result, a plan is developed for the next increment. The plan
addresses the modification of the core product to better meet the needs
of the customer.
 Process is repeated until the complete product is produced
The incremental process model, like prototyping and other
evolutionary approaches, is iterative in nature. But unlike
prototyping, the incremental model focuses on the delivery of an
operational product with each increment.
This model is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been
established for the project.
THE RAD MODEL (Rapid Application Development)
 An incremental software process model
 Having a short development cycle


 A high-speed adaptation of the waterfall model using a component-
based construction approach
 Creates a fully functional system within a very short time span of
60 to 90 days
 Multiple software teams work in parallel on different functions
 Modeling encompasses three major phases: Business modeling,
Data modeling and process modeling
 Construction uses reusable components, automatic code
generation and testing
 Problems in RAD
 Requires a number of RAD teams
 Requires commitment from both developer and customer for
rapid-fire completion of activities otherwise it fails
 If system cannot be modularized properly project will fail.
 Not suited when technical risks are high

Fig.: The RAD model. Multiple teams (Team #1 to Team #n) work in
parallel; each team performs communication, planning, modeling
(business modeling, data modeling, process modeling) and construction
(component reuse, automatic code generation, testing), followed by
deployment (integration, delivery, feedback).

EVOLUTIONARY PROCESS MODEL


 Software evolves over a period of time
 Business and product requirements often change as development
proceeds making a straight-line path to an end product
unrealistic
 Evolutionary models are iterative and as such are applicable to
modern day applications
 Types of evolutionary models
o Prototyping
o Spiral model
o Concurrent development model
PROTOTYPING
 A mock-up or model (throw-away version) of a software product
 Used when the customer defines a set of objectives but does not
identify input, output, or processing requirements
 Developer is not sure of:
o efficiency of an algorithm
o adaptability of an operating system
o human/machine interaction
 Begins with requirements gathering
 Identify whatever requirements are known
 Outline areas where further definition is mandatory
 A quick design occurs
 The quick design leads to the construction of a prototype
 The prototype is evaluated by the customer
 Requirements are refined
 The prototype is tuned to satisfy the needs of the customer, while at
the same time enabling the developer to better understand what needs
to be done.
 Ideally prototype serves as a mechanism for identifying software
requirements.
Disadvantages:
 In a rush to get it working, overall software quality or long term
maintainability are generally overlooked
 The developer often makes implementation compromises in
order to get a prototype working quickly. An inappropriate OS
or PL may be used simply because it is available and known; an
inefficient algorithm may be implemented simply to demonstrate
capability. After a time, developer may become comfortable
with these choices and forget all the reasons why they were
inappropriate.
Spiral model
 This model, proposed by Boehm, is an evolutionary model
which combines the best features of the classical life cycle with the
iterative nature of the prototyping model
 Includes a new element: risk
 Starts in the middle and continually revisits the basic tasks of
communication, planning, modeling, construction and
deployment
 Using the spiral model, software is developed in a series of
evolutionary releases. During early iterations, the release might
be a model or prototype. During later iterations, increasingly
more complete versions of the engineered system are produced.


 Realistic approach to the development of large scale system and
software
 Unlike other process models that end when software is delivered,
the spiral model can be adapted to apply throughout the life of
computer software. Therefore the first circuit around the spiral
represents a “concept development project” which starts at the
core of the spiral and continues for multiple iterations until
concept development is complete. If the concept is to be
developed into an actual product, the process proceeds outward on
the spiral and a “new product development project” commences.
The new product will evolve through a number of iterations around
the spiral. Later, a circuit around the spiral might be used to
represent a “product enhancement project”. The spiral remains
operative until the software is retired.
 It is difficult to convince customers that the evolutionary
approach is controllable.


CONCURRENT DEVELOPMENT MODEL


 This is sometimes called concurrent engineering. This is
represented schematically as a series of framework activities,
software engineering actions and tasks, and their associated
states.

 The figure provides a schematic representation of one software
engineering task within the modeling activity. The modeling activity
may be in any one of these states at any given time. Similarly, other
activities can be represented in a similar manner.
 All activities exist concurrently but reside in different states.

 For example, early in a project the communication activity has
completed its first iteration and exists in the awaiting changes state.
The modeling activity, which existed in the none state while the initial
communication was being completed, now makes a transition into the
under development state. If, however, the customer indicates that
changes in requirements must be made, the modeling activity moves
from the under development state to the awaiting changes state.
 The concurrent process model defines a series of events that will
trigger transitions from state to state for each of the software
engineering activities, actions and tasks.
 This model is applicable to all types of software development
and provides an accurate picture of the current state of a project.
UNIFIED PROCESS MODEL
 This model was proposed by Ivar Jacobson, Grady Booch and
James Rumbaugh
 It is a use-case driven, architecture-centric, iterative and
incremental software process
 It is a framework for object-oriented software engineering using
UML.
 The inception phase of the Unified Process (UP) encompasses both
customer communication and planning activities. By
collaborating with the customer, the business requirements for the
software are identified. Fundamental business requirements are
expressed through a set of preliminary use-cases that describe
what features and functions are desired by each major class of users.
Architecture at this point is nothing more than an outline of the major
features and functions. Planning identifies resources, assesses major
risks, and defines a schedule.
 The elaboration phase encompasses communication, planning
and modeling. Elaboration refines and expands the preliminary use-
cases that were developed as part of the inception phase and expands
the architectural representation to include five different views of the
software: the use-case model, the analysis model, the design model,
the implementation model and the deployment model. Modifications
to the plan may be made at this time.
 The construction phase of the UP is identical to the construction
activity of the generic process models. All necessary features and
functions are implemented in source code. As components are being
developed, unit tests are conducted.
 The transition phase of UP encompasses later stages of
construction activity and first part of deployment activity.
Software is given to end-users for beta-testing. In addition
software team creates installation procedures, user manuals etc.
 The production phase coincides with the deployment activity. During
this phase, the ongoing use of the software is monitored, and defect
reports and requests for changes are submitted and evaluated.


CHAPTER 2
Software Requirements
KEY CONCEPTS
2.1. TYPES OF REQUIREMENT................................................. 27
2.2. FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS 28
2.3. FUNCTIONAL REQUIREMENTS ........................................ 28
2.4. NON-FUNCTIONAL REQUIREMENTS ............................... 29
2.5. NON-FUNCTIONAL CLASSIFICATIONS ........................... 29
2.6. DOMAIN REQUIREMENTS ............................................... 32
2.7. USER REQUIREMENTS...................................................... 34
2.8. SYSTEM REQUIREMENTS ................................................. 35
2.9. INTERFACE SPECIFICATION ............................................. 40
2.10. THE SOFTWARE REQUIREMENTS DOCUMENT................. 41
2.11. REQUIREMENTS ENGINEERING PROCESS ........................ 45
2.12. REQUIREMENTS ELICITATION AND ANALYSIS ............... 46
2.13. REQUIREMENTS DISCOVERY ........................................... 48
2.14. ETHNOGRAPHY ............................................................... 53
2.15. SYSTEM MODELS ............................................................. 58
2.16. CONTEXT MODELS .......................................................... 59
2.17. BEHAVIORAL MODELS .................................................... 61
2.18. STRUCTURED METHODS.................................................. 74

The process of finding out, analyzing and documenting the
requirements is called requirements engineering.

2.1. TYPES OF REQUIREMENT

User requirements
 Statements in natural language plus diagrams of the services the
system provides and its operational constraints. Written for
customers.
System requirements
 A structured document setting out detailed descriptions of the
system’s functions, services and operational constraints. It defines
what should be implemented and so may be part of a contract
between client and contractor.

2.2. FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS

Functional requirements
 Statements of the services the system should provide, how the system
should react to particular inputs and how the system should
behave in particular situations.
Non-functional requirements
 Constraints on the services or functions offered by the system
such as timing constraints, constraints on the development
process, standards, etc.
Domain requirements
 Requirements that come from the application domain of the
system and that reflect characteristics of that domain

2.3. FUNCTIONAL REQUIREMENTS

 Describe functionality or system services.


 Depend on the type of software, expected users and the type of
system where the software is used.
 Functional user requirements may be high-level statements of
what the system should do but functional system requirements
should describe the system services in detail.
The LIBSYS system
 A library system that provides a single interface to a number of
databases of articles in different libraries.
 Users can search for, download and print these articles for
personal study.
Examples of functional requirements
 The user shall be able to search either all of the initial set of
databases or select a subset from it.
 The system shall provide appropriate viewers for the user to read
documents in the document store.


 Every order shall be allocated a unique identifier (ORDER_ID)
which the user shall be able to copy to the account’s permanent
storage area.
 In principle, requirements should be both complete and
consistent.
Complete
 They should include descriptions of all facilities required.
Consistent
 There should be no conflicts or contradictions in the descriptions
of the system facilities.
 In practice, it is impossible to produce a complete and consistent
requirements document.

2.4. NON-FUNCTIONAL REQUIREMENTS

 These define system properties and constraints e.g. reliability,


response time and storage requirements. Constraints are I/O
device capability, system representations, etc.
 Non-functional requirements may be more critical than
functional requirements. If these are not met, the system is
useless.

2.5. NON-FUNCTIONAL CLASSIFICATIONS

1. Product requirements: These requirements specify product


behavior. Examples include performance requirements on how fast
the system must execute and how much memory it requires;
reliability requirements that set out the acceptable failure rate;
portability requirements; and usability requirements.
2. Organizational requirements: These requirements are derived
from policies and procedures in the customer’s and developer’s
organizations. Examples include process standards that must be used;
implementation requirements such as the programming language or
design method used; and delivery requirements that specify when the
product and its documentation are to be delivered.
3. External requirements: This broad heading covers all requirements


that are derived from factors external to the system and its
development process. These may include interoperability
requirements that define how the system interacts with systems in
other organizations; legislative requirements that must be followed to
ensure that the system operates within the law; and ethical
requirements. Ethical requirements are requirements placed on a
system to ensure that it will be acceptable to its users and the general
public.

Fig 1: Non-functional requirements examples


Product requirement
 The user interface for LIBSYS shall be implemented as simple
HTML without frames or Java applets.
Organizational requirement
 The system development process and deliverable documents shall
conform to the process and deliverables defined in XYZCo-SP-
STAN-95.
External requirement
 The system shall not disclose any personal information about
customers apart from their name and reference number to the
operators of the system.


A common problem with non-functional requirements is that they
can be difficult to verify. Users or customers often state these
requirements as general goals. These vague goals cause problems for
system developers, as they leave scope for interpretation and
subsequent dispute once the system is delivered. As an illustration of
this problem, consider the example below, which shows a system goal
relating to the usability of a traffic control system and is typical of how
a user might express usability requirements. It is then rewritten to show
how the goal can be expressed as a ‘testable’ non-functional requirement.
Goal
 A general intention of the user such as ease of use.
Verifiable non-functional requirement
 A statement using some measure that can be objectively tested.
A system goal
 The system should be easy to use by experienced controllers and
should be organized in such a way that user errors are
minimized.
A verifiable non-functional requirement
 Experienced controllers shall be able to use all the system
functions after a total of two hours training. After this training,
the average number of errors made by experienced users shall
not exceed two per day.
Whenever possible we should write non-functional requirements
quantitatively so that they can be objectively tested.
You can measure these characteristics when the system is being
tested to check whether or not the system has met its non-functional
requirements.
In practice, however, customers for a system may find it practically


impossible to translate their goals into quantitative requirements. For
some goals, such as maintainability, there are no metrics that can be
used.

2.6. DOMAIN REQUIREMENTS

Domain requirements are derived from the application domain of the


system rather than from the specific needs of system users.
Domain requirements are important because they often reflect
fundamentals of the application domain. If these requirements are
not satisfied, it may be impossible to make the system work
satisfactorily.
Library system domain requirements


 There shall be a standard user interface to all databases which
shall be based on the Z39.50 standard.
 Because of copyright restrictions, some documents must be
deleted immediately on arrival. Depending on the user’s
requirements, these documents will either be printed locally on
the system server for manually forwarding to the user or routed
to a network printer.
The first requirement is a design constraint. It specifies that the user
interface to the database must be implemented according to a
specific library standard. The developers therefore have to find out
about that standard before starting the interface design. The second
requirement has been introduced because of copyright laws that
apply to material used in libraries. It specifies that the system must
include an automatic delete-on-print facility for some classes of
document. This means that users of the library system cannot have
their own electronic copy of the document.
The deceleration of the train shall be computed as:
Dtrain = Dcontrol + Dgradient
where Dgradient = 9.81 m/s² * compensated gradient / alpha, and where
the values of 9.81 m/s² / alpha are known for different types of train.
This system automatically stops a train if it goes through a red signal.
This requirement states how the train deceleration is computed by
the system. It uses domain-specific terminology. To understand it,
you need some understanding of the operation of railway systems
and train characteristics.
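As a rough illustration only, the sketch below shows how this domain
formula could be computed in Java; the class name, parameter names and
the sample values are hypothetical and are not part of the requirement.

// Illustrative sketch: computes Dtrain = Dcontrol + Dgradient, where
// Dgradient = 9.81 m/s^2 * compensated gradient / alpha.
// All names and sample values are hypothetical.
public class TrainDeceleration {

    static final double GRAVITY = 9.81; // m/s^2

    // alpha is a train-type-specific constant, assumed known per train type
    static double deceleration(double dControl, double compensatedGradient, double alpha) {
        double dGradient = GRAVITY * compensatedGradient / alpha;
        return dControl + dGradient;
    }

    public static void main(String[] args) {
        // Example values chosen only to demonstrate the calculation
        double dTrain = deceleration(0.8, 0.05, 12.0);
        System.out.println("Dtrain = " + dTrain + " m/s^2");
    }
}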
Domain requirements problems
Requirements are expressed in the language of the application
domain;
This is often not understood by software engineers developing the
system.
Domain experts may leave information out of a requirement simply
because it is so obvious to them. However, it may not be obvious to
the developers of the system, and they may therefore implement the
requirement in the wrong way.

2.7. USER REQUIREMENTS

Should describe functional and non-functional requirements in such a


way that they are understandable by system users who do not have
detailed technical knowledge.
User requirements are defined using natural language, tables and
diagrams as these can be understood by all users.
However various problems can arise when requirements are written
in natural language sentences in a text document:
Lack of clarity
It is sometimes difficult to use language in a precise and
unambiguous way without making the document wordy and difficult
to read.
Requirements confusion
Functional and non-functional requirements tend to be mixed-up.
Requirements amalgamation
Several different requirements may be expressed together as a single
requirement.
EXAMPLE
Discover ambiguities and omissions for part of a ticket-issuing system
 An automated ticket-issuing system sells rail tickets. Users select
their destination and input a credit card and a personal
identification number. The rail ticket is issued and their credit
card account charged. When the user presses the start button, a
menu display of potential destinations is activated, along with a
message to the user to select a destination. Once a destination
has been selected, users are requested to input a personal
identification number. When the credit transaction has been
validated, the ticket is issued.
Sol:
 Can a customer buy several tickets for the same destination
together or must they be bought one at a time?


 Can customers cancel a request if a mistake has been made?
 How should the system respond if an invalid card is input?
 What happens if customers try to put their card in before
selecting a destination (as they would in ATM machines)?
 Must the user press the start button again if they wish to buy
another ticket to a different destination?
To minimize misunderstandings when writing user requirements,
following guidelines are recommended.
 Invent a standard format and use it for all requirements.
 Use language in a consistent way. Use shall for mandatory
requirements, should for desirable requirements.
 Use text highlighting to identify key parts of the requirement.
 Avoid the use of computer jargon.
EXAMPLE
Using the technique suggested here, where natural language is
presented in a standard way; write plausible user requirements for
the following function
The cash dispensing function in a bank ATM.
SOL: The user shall enter their bank card in the slot provided.
Following the appropriate prompts for a cash withdrawal, the user
shall enter the requested amount. If the amount requested is not
greater than the amount in the account, cash shall be dispensed, and
the card shall be returned.

2.8. SYSTEM REQUIREMENTS

System requirements are expanded versions of the user requirements


that are used by software engineers as the starting point for the
system design. They add detail and explain how the user
requirements should be provided by the system.
Ideally, the system requirements should simply describe the external
behavior of the system and its operational constraints. They should
not be concerned with how the system should be designed or
implemented.
Natural language is often used to write system requirements
specifications as well as user requirements. However, because system
requirements are more detailed than user requirements, natural
language specifications can be confusing and hard to understand:
 Natural language understanding relies on the specification
readers and writers using the same words for the same concept.
This leads to misunderstandings because of the ambiguity of
natural language (e.g. ‘shoes must be worn’, ‘dogs must be
carried’).
 A natural language requirements specification is over-flexible.
You can say the same thing in completely different ways. It is up
to the reader to find out when requirements are the same and
when they are distinct.
 There is no easy way to modularize natural language
requirements. It may be difficult to find all related requirements.
Because of these problems, requirements specifications written in
natural language are prone to misunderstandings. These are often not
discovered until later phases of the software process and may then
be very expensive to resolve.
Structured language specifications


 The freedom of the requirements writer is limited by a
predefined template for requirements.
 All requirements are written in a standard way.
 The terminology used in the description may be limited.
The advantage is that most of the expressiveness of natural
language is maintained, but a degree of uniformity is imposed on the
specification.
When a standard form is used for specifying functional requirements,
the following information should be included (an illustrative example
follows the list):
 Definition of the function or entity.
 Description of inputs and where they come from.
 Description of outputs and where they go to.
 Indication of other entities required (the ‘requires’ part).
 Description of the action to be taken.
 Pre and post conditions (if appropriate).


 The side effects (if any) of the function.
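As a purely illustrative sketch, the cash-dispensing function of the bank
ATM described earlier in this chapter might be written in such a standard
form as follows; the field names simply follow the list above and the
wording is an assumption, not an extract from an actual specification.

Function: Dispense cash
Description: Dispenses a requested amount of cash to the holder of a validated bank card.
Inputs: Bank card, personal identification number, requested amount.
Source: Card reader and customer keypad.
Outputs: Cash, returned card.
Destination: Cash dispenser and card slot.
Requires: Customer account record.
Action: If the requested amount is not greater than the amount in the account, the cash is dispensed and the card is returned; otherwise the request is refused.
Pre-condition: The card and personal identification number have been validated.
Post-condition: The account balance is reduced by the amount dispensed.
Side effects: None.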

Using formatted specifications removes some of the problems of
natural language specification. Variability in the specification is
reduced and requirements are organized more effectively. However,
it is still difficult to write requirements unambiguously, particularly
when complex computations are required.
To address this problem, you can add extra information to natural
language requirements using tables or graphical models of the
system.
Tables are particularly useful when there are a number of possible
alternative situations and you need to describe the actions to be
taken for each of these.
Graphical models are most useful when you need to show how state
changes or where you need to describe a sequence of actions.
Sequence diagrams
 These show the sequence of events that take place during some
user interaction with a system.
 You read them from top to bottom to see the order of the
actions that take place.
 Cash withdrawal from an ATM
o Validate card,
o Handle request,
o Complete transaction.

Fig 2: Sequence Diagram for cash withdrawal from an ATM

2.9. INTERFACE SPECIFICATION

Almost all software systems must operate with existing systems that
have already been implemented and installed in an environment. If
the new system and the existing systems must work together, the interfaces of the existing systems have to be precisely specified.
There are three types of interface that may have to be defined:
1. Procedural interfaces where existing programs or sub-systems offer
a range of services that are accessed by calling interface procedures.
These interfaces are sometimes called Application Programming Interfaces (APIs).

2. Data structures that are passed from one sub-system to another. Graphical data models are the best notations for this type of description.
3. Representations of data (such as the ordering of bits) that have
been established for an existing sub-system. The best way to describe
these is probably to use a diagram of the structure with annotations
explaining the function of each group of bits.
The procedure below is an example of a procedural interface definition written in Java. In this case, the interface is the procedural interface offered by a print server. This manages a queue of requests to print files on different printers. Users may examine the queue associated with a printer and may remove their print jobs from that queue. They may also switch jobs from one printer to another.
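Because the original listing is not reproduced here, the following is a minimal Java sketch of what such a print server interface might look like; the interface and method names are illustrative assumptions rather than the definitive definition:

interface Printer { }        // marker type for a physical printer (assumed)
interface PrintDoc { }       // marker type for a document to be printed (assumed)

interface PrintServer {
    // Register a printer with the print server
    void initialize (Printer p);
    // Queue a document for printing on the given printer
    void print (Printer p, PrintDoc d);
    // Display the queue of jobs waiting for the given printer
    void displayPrintQueue (Printer p);
    // Remove one of the user's print jobs from the printer's queue
    void cancelPrintJob (Printer p, PrintDoc d);
    // Move a queued job from one printer to another
    void switchPrinter (Printer p1, Printer p2, PrintDoc d);
}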

2.10. THE SOFTWARE REQUIREMENTS DOCUMENT

The software requirements document (sometimes called the software requirements specification or SRS) is the official statement of what
the system developers should implement. It should include both the
user requirements for a system and a detailed specification of the
system requirements.
In some cases, the user and system requirements may be integrated
into a single description. In other cases, the user requirements are
defined in an introduction to the system requirements specification.

If there are a large number of requirements, the detailed system requirements may be presented in a separate document.

Fig 3: software requirements specification


A number of large organizations, such as the US Department of
Defense and the IEEE, have defined standards for requirements
documents.

The IEEE standard suggests the following structure for requirements documents:
1. Introduction
1.1. Purpose of the requirements document
1.2. Scope of the product
1.3. Definitions, acronyms and abbreviations
1.4. References
1.5. Overview of the remainder of the document
2. General description
2.1. Product perspective
2.2. Product functions
2.3. User characteristics
2.4. General constraints
2.5. Assumptions and dependencies
3. Specific requirements cover functional, non-functional and
interface requirements. The requirements may document
external interfaces, describe system functionality and
performance, specify logical database requirements, design
constraints, emergent system properties and quality
characteristics.
4. Appendices
5. Index
Although the IEEE standard is not ideal, it contains a great deal of
good advice on how to write requirements and how to avoid
problems. It is too general to be an organizational standard in its own
right.
The table below illustrates a possible organization for a requirements document that is based on the IEEE standard. However, this organization has been extended to include information about predicted system evolution, an idea first proposed by Heninger.
The information that is included in a requirements document must
depend on the type of software being developed and the approach
to development that is used. If an evolutionary approach is adopted
for a software product (say), the requirements document will leave out many of the detailed chapters suggested above.

The focus will be on defining the user requirements and high-level, non-functional system requirements. In this case, the designers and programmers use their judgment to decide how to meet the outline user requirements for the system.
For long documents, it is particularly important to include a
comprehensive table of contents and document index so that readers
can find the information that they need.

2.11. REQUIREMENTS ENGINEERING PROCESS

 The goal of the requirements engineering process is to create and maintain a system requirements document.
 The overall process includes four high-level requirements
engineering sub-processes.
o These are concerned with assessing whether the system is
useful to the business (feasibility study);
o discovering requirements (elicitation and analysis);
o converting these requirements into some standard form
(specification);
o Checking that the requirements actually define the
system that the customer wants (validation).
 Figure 4 illustrates the relationship between these activities. It also
shows the documents produced at each stage of the
requirements engineering process.

Fig 4: Requirements Engineering Process



Feasibility studies
 For all new systems, the requirements engineering process should
start with a feasibility study. The input to the feasibility study is a
set of preliminary business requirements, an outline description
of the system and how the system is intended to support business
processes.
 The results of the feasibility study should be a report that
recommends whether or not it is worth carrying on with the
requirements engineering and system development process
 Technical feasibility: It is carried out to determine whether the
company has the capability in terms of software, hardware &
personnel expertise to handle completion of project.
 Economic feasibility: If benefits outweigh costs, then the decision is made to implement.
 Operational feasibility: How well a proposed system solves
problems?
 In a feasibility study, you may consult information sources such
as the managers of the departments where the system will be
used, software engineers who are familiar with the type of system
that is proposed, technology experts and end-users of the
system. Normally, you should try to complete a feasibility study
in two or three weeks.

2.12. REQUIREMENTS ELICITATION AND ANALYSIS

 The next stage of the requirements engineering process is requirements elicitation and analysis. In this activity, software
engineers work with customers and system end-users to find out
about the application domain, what services the system should
provide, the required performance of the system, hardware
constraints, and so on.
 Requirements elicitation and analysis may involve a variety of
people in an organization. The term stakeholder is used to refer
to any person or group who will be affected by the system,
directly or indirectly. Stakeholders include end-users who
interact with the system and everyone else in an organization that may be affected by its installation.


Eliciting and understanding stakeholder requirements is difficult for
several reasons:
 Stakeholders often don't know what they want from the
computer system except in the most general terms.
 Stakeholders naturally express requirements in their own terms
and with implicit knowledge of their own work. Requirements
engineers, without experience in the customer's domain, must
understand these requirements.
 Different stakeholders have different requirements, which they
may express in different ways.
 Political factors may influence the requirements of the system.
For example, managers may demand specific system
requirements that will increase their influence in the organization.
 New requirements may emerge from new stakeholders who were
not originally consulted.
System stakeholders for a bank ATM include:
 Current bank customers who receive services from the system
 Representatives from other banks who have reciprocal agreements that allow each other's ATMs to be used
 Managers of bank branches who obtain management information from the system
 Counter staff at bank branches who are involved in the day-to-day running of the system
 Database administrators who are responsible for integrating the system with the bank's customer database
 Bank security managers who must ensure that the system will not pose a security hazard
 The bank's marketing department who are likely to be interested in using the system as a means of marketing the bank
 Hardware and software maintenance engineers who are responsible for maintaining and upgrading the hardware and software
 National banking regulators who are responsible for ensuring that the system conforms to banking regulations.
The process activities in requirements elicitation and analysis phase
are:
 Requirements discovery: This is the process of interacting with
stakeholders in the system to collect their requirements. Domain
requirements from stakeholders and documentation are also
discovered during this activity.
 Requirements classification and organization: This activity takes
the unstructured collection of requirements, groups related
requirements and organizes them into coherent clusters.
 Requirements prioritization and negotiation: Inevitably, where
multiple stakeholders are involved, requirements will conflict.
This activity is concerned with prioritizing requirements, and
finding and resolving requirements conflicts through negotiation.
 Requirements documentation: The requirements are documented and input into the next round of the spiral.

2.13. REQUIREMENTS DISCOVERY

 Requirements discovery is the process of gathering information about the proposed and existing systems and distilling the user and system requirements from this information.
 Techniques of requirements discovery include viewpoints,
interviewing, scenarios and ethnography.
Viewpoints
 These requirements sources (stakeholders, domain, and systems)
can all be represented as system viewpoints, where each
viewpoint presents a sub-set of the requirements for the system.
View Point Identification
 Providers of services to the system and receivers of system
services
 Systems that should interface directly with the system being specified
 Regulations and standards that apply to the system
 The sources of system business and non-functional requirements
 Engineering viewpoints
 Marketing and other viewpoints that generate requirements on
the product features expected by customers and how the system
should reflect the external image of the organization
Viewpoints can be used as a way of classifying stakeholders
 Interactor viewpoints represent people or other systems that
interact directly with the system. In the bank ATM system,
examples of interactor viewpoints are the bank's customers and
the bank's account database.
 Indirect viewpoints represent stakeholders who do not use the
system themselves but who influence the requirements in some
way. In the bank ATM system, examples of indirect viewpoints
are the management of the bank and the bank security staff.
 Domain viewpoints represent domain characteristics and
constraints that influence the system requirements. In the bank
ATM system, an example of a domain viewpoint would be the
standards that have been developed for interbank
communications.

Fig 5: Viewpoint of LIBSYS



Interviewing
Interviews may be of two types:
 Closed interviews where the stakeholder answers a predefined set
of questions.
 Open interviews where there is no predefined agenda. The
requirements engineering team explores a range of issues with
system stakeholders and hence develops a better understanding
of their needs.
However, interviews are not so good for understanding the
requirements from the application domain.
It is hard to elicit domain knowledge during interviews for two
reasons:
 All application specialists use terminology and jargon that is
specific to a domain. It is impossible for them to discuss domain
requirements without using this terminology. They normally use
terminology in a precise and subtle way that is easy for
requirements engineers to misunderstand.
 Some domain knowledge is so familiar to stakeholders that either
they find it difficult to explain or they think it is so fundamental
that it is not worth mentioning. For example, for a librarian, it is
understood that all acquisitions are catalogued before they are
added to the library. However, this may not be obvious to the
interviewer so it is not taken into account in the requirements.
Effective interviewers have two characteristics:
 They are open-minded, avoid preconceived ideas about the
requirements and are willing to listen to stakeholders. If the
stakeholder comes up with surprising requirements, they are
willing to change their mind about the system.
 They prompt the interviewee to start discussions with a question.
Scenarios
 Scenarios can be particularly useful for adding detail to an
outline requirements description. They are descriptions of
example interaction sessions. Each scenario covers one or more
possible interactions.

 A scenario may include:


o A description of what the system and users expect when
the scenario starts
o A description of the normal flow of events in the
scenario
o A description of what can go wrong and how this is
handled
o Information about other activities that might be going on
at the same time
o A description of the system state when the scenario finishes.
 Scenarios may be written as text, supplemented by diagrams,
screen shots, and so on.
 As a simple text scenario, consider how a user of the LIBSYS library system may use the system. This scenario is shown in Figure 6.
Use-Cases
 Use-cases are a scenario-based technique for requirements
elicitation which were first introduced in the Objectory method.
 Use-case identifies the type of interaction and the actors
involved.
 Use-cases identify the individual interactions with the system.
 Actors in the process are represented as stick figures, and each
class of interaction is represented as a named ellipse. The set of
use-cases represents all of the possible interactions to be
represented in the system requirements.

Figure 6 develops the LIBSYS example and shows other use-cases in that environment.

Fig 6: Use-case diagram for LIBSYS



2.14. ETHNOGRAPHY

Ethnography is an observational technique that can be used to understand social and organizational requirements. An analyst immerses him or herself in the working environment where the system will be used. He or she observes the day-to-day work and makes notes of the actual tasks in which participants are involved.
Ethnography is particularly effective at discovering two types of
requirements:
 Requirements that are derived from the way in which people
actually work
 Requirements that are derived from cooperation and awareness
of other people's activities.
Requirements Validation
Concerned with demonstrating that the requirements define the
system that the customer really wants.
 Requirements error costs are high so validation is very important
o Fixing a requirements error after delivery may cost up to
100 times the cost of fixing an implementation error.
During the requirements validation process, checks should be carried
out on the requirements in the requirements document. These
checks include:
 Validity. Does the system provide the functions which best
support the customer’s needs?
 Consistency. Are there any requirements conflicts?
 Completeness. Are all functions required by the customer
included?
 Realism. Can the requirements be implemented given available
budget and technology?
 Verifiability. Can the requirements be checked?
Requirements Validation Techniques
 Requirements reviews: Systematic manual analysis of the requirements.
 Prototyping: Using an executable model of the system to check that the requirements are covered.
 Test-case generation: Developing tests for requirements to check testability.
Requirements reviews
 Regular reviews should be held while the requirements definition
is being formulated.
 Both client and contractor staff should be involved in reviews.
 Requirements reviews can be informal or formal.
 Informal reviews simply involve contractors discussing
requirements with as many system stakeholders as possible.
 In a formal requirements review, the development team should
'walk' the client through the system requirements, explaining the
implications of each requirement.
 Reviewers may also check for:
o Verifiability. Is the requirement realistically testable?
o Comprehensibility. Is the requirement properly
understood?
o Traceability. Is the origin of the requirement clearly
stated?
o Adaptability. Can the requirement be changed without a
large impact on other requirements?
Requirements Management
 The requirements for large software systems are always changing.
 Requirements management is the process of understanding and
controlling changes to system requirements. You need to keep
track of individual requirements
 Requirements are inevitably incomplete and inconsistent
o New requirements emerge during the process as business
needs change and a better understanding of the system is
developed;
o Different viewpoints have different requirements and
these are often contradictory.

Enduring and volatile requirements

Fig 7: Change in business objectives


 As the requirements definition is developed, you normally develop a better understanding of users' needs. This feeds information back to the user, who may then propose a change to the requirements (Figure 7). Furthermore, it may take several years to specify and develop a large system. Over that time, the system's environment and the business objectives may change.
 From an evolution perspective, requirements fall into two classes:
 Enduring requirements. Stable requirements derived from the
core activity of the customer organisation. E.g. a hospital will
always have doctors, nurses, etc. May be derived from domain
models
 Volatile requirements. Requirements which change during
development or when the system is in use.

Requirements management planning


Planning is an essential first stage in the requirements management process. Requirements management is very expensive. During the requirements management stage, you have to decide on:
Requirements identification: How requirements are uniquely identified.
A change management process: This is the set of activities that assess the impact and cost of changes.
Traceability policies: These policies define the relationships between requirements, and between the requirements and the system design, that should be recorded and how these records should be maintained.
CASE tool support: The tool support required to help manage requirements change.
Traceability
Traceability is concerned with the relationships between requirements, their sources and the system design.
Source traceability: Links from requirements to the stakeholders who proposed these requirements.

Requirements traceability: Links between dependent requirements; you use this information to assess how many requirements are likely to be affected by a proposed change.
Design traceability: Links from the requirements to the design; you use this information to assess the impact of proposed requirements changes on the system design and implementation.
Traceability information is often represented using traceability matrices.
The table below shows a simple traceability matrix that records the
dependencies between requirements. A 'D' in the row/column
intersection illustrates that the requirement in the row depends on
the requirement named in the column; an 'R' means that there is
some other, weaker relationship between the requirements. For
example, they may both define the requirements for parts of the
same subsystem.
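Since the table itself is not reproduced here, an illustrative matrix with made-up requirement identifiers is shown below; a 'D' means the requirement in the row depends on the requirement in the column, and an 'R' marks a weaker relationship:

          R1.1   R1.2   R1.3   R2.1
R1.1        -      D      R
R1.2               -             D
R1.3        R             -
R2.1                      D      -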
Traceability matrices can be generated automatically from the
database.

CASE Tool Support


Requirements management needs automated support; the CASE
tools for this should be chosen during the planning phase. You need
tool support for:

Requirements storage: The requirements should be maintained in a secure, managed data store that is accessible to everyone involved in the requirements engineering process.
Change management: The process of change management (Figure 7.1) is simplified if active tool support is available.
Traceability management: As discussed above, tool support for traceability allows related requirements to be discovered.
Requirements Change Management
Requirements change management (Figure 7.1) should be applied to
all proposed changes to the requirements.

Fig 7.1: Requirements management


Problem analysis and change specification: The process starts with an identified requirements problem or, sometimes, with a specific change proposal. During this stage, the problem or the change proposal is analyzed to check that it is valid.
Change analysis and costing: The effect of the proposed change is assessed using traceability information and general knowledge of the system requirements.
Change implementation: The requirements document and, where necessary, the system design and implementation are modified.

2.15. SYSTEM MODELS

 System models are graphical representations that describe business processes, the problem to be solved and the system that is to be developed.
 System modelling helps the analyst to understand the
functionality of the system and models are used to communicate
with customers.
 Different models present the system from different perspectives
o External perspective showing the system’s context or
environment;
o Behavioral perspective showing the behaviour of the
system;
o Structural perspective showing the system or data
architecture.
 A system model is an abstraction of the system being studied
 Examples of the types of system models that you might create
during the analysis process are:
o A data-flow model: Data-flow models show how data is processed at different stages in the system.
o A composition model: A composition or aggregation model shows how entities in the system are composed of other entities.
o An architectural model: Architectural models show the principal sub-systems that make up a system.
o A classification model: Object class/inheritance diagrams show how entities have common characteristics.
o A stimulus-response model: A stimulus-response model, or state transition diagram, shows how the system reacts to internal and external events.

2.16. CONTEXT MODELS

 At an early stage in the requirements elicitation and analysis process, we should decide on the boundaries of the system. This involves working with system stakeholders to distinguish what is the system and what is the system's environment.
 Social and organizational concerns may affect the decision on
where to position system boundaries.
 Once some decisions on the boundaries of the system have been
made, part of the analysis activity is the definition of that context
and the dependencies that a system has on its environment.

Normally, producing a simple architectural model is the first step in this activity.
 Architectural models show the system and its relationship with
other systems.
 High-level architectural models are usually expressed as simple
block diagrams where each sub-system is represented by a named
rectangle, and lines indicate associations between sub-systems
 From Figure 8, we see that each ATM is connected to an
account database, a local branch accounting system, a security
system and a system to support machine maintenance. The
system is also connected to a usage database that monitors how
the network of ATMs is used and to a local branch counter
system. This counter system provides services such as backup
and printing.
 Architectural models describe the environment of a system.
However, they do not show the relationships between the other
systems in the environment and the system that is being
specified. External systems might produce data for or consume
data from the system.

Fig 8: Architectural model of an ATM


 Simple architectural models are normally supplemented by other
models, such as process models, that show the process activities
supported by the system.
 Data-flow models may also be used to show the data that is
transferred between the system and other systems in its environment.

Fig 8.1: Specifying the equipment required


Figure 8.1 illustrates a process model for the process of procuring
equipment in an organization. This involves specifying the equipment
required, finding and choosing suppliers, ordering the equipment,
taking delivery of the equipment and testing it after delivery. When
specifying computer support for this process, we have to decide
which of these activities will actually be supported. The other
activities are outside the boundary of the system. In Figure 8.1, the
dotted line encloses the activities that are within the system
boundary.

2.17. BEHAVIORAL MODELS

Behavioral models are used to describe the overall behaviour of a system.
Two types of behavioral model are:
 Data processing models that show how data is processed as it
moves through the system;
 State machine models that show the system's response to events.


Data flow models
 Data flow diagrams (DFDs) may be used to model the system’s
data processing.
 These show the processing steps as data flows through a system.
 DFDs are an intrinsic part of many analysis methods.
 Simple and intuitive notation that customers can understand.
 The notation used in these models represents functional
processing (rounded rectangles), data stores (rectangles) and
data movements between functions (labeled arrows).

Fig 8.2: Data flow diagram of order processing


A data-flow model, which shows the steps involved in processing an
order for goods (such as computer equipment) in an organization, is
illustrated in Figure 8.2. This particular model describes the data processing in the Place equipment order activity in the overall process model shown in Figure 8.1. The model shows how the order
for the goods moves from process to process. It also shows the data
stores (Orders file and Budget file) that are involved in this process.
 Data-flow models are valuable because tracking and documenting
how the data associated with a particular process moves through
the system helps analysts understand what is going on
 Data-flow models show a functional perspective where each
transformation represents a single function or process. They are
particularly useful during the analysis of requirements.


 That is, they show the entire sequence of actions that take place
from an input being processed to the corresponding output that
is the system's response. Figure 8.3 illustrates this use of data
flow diagrams.

Fig 8.3: DFD for Insulin Pump


State-Machine models
 These model the behaviour of the system in response to external
and internal events.
 They show the system’s responses to stimuli so are often used for
modelling real-time systems.
 State machine models show system states as nodes and events as arcs between these nodes. When an event occurs, the system moves from one state to another.
Statecharts are an integral part of the UML and are used to represent state machine models.
State-charts allow the decomposition of a model into sub-models. A
brief description of the actions is included following the ‘do’ in each
state.
This approach to system modeling is illustrated in Figure 8.4. This diagram shows a state machine model of a simple microwave oven
equipped with buttons to set the power and the timer and to start
the system.
The sequence of actions in using the microwave is:
 Select the power level (either half-power or full-power).


 Input the cooking time.
 Press Start, and the food is cooked for the given time.

Fig 8.4: State-Machine Model of a Microwave Oven


For safety reasons, the oven should not operate when the door is
open and, on completion of cooking, a buzzer is sounded. We can
see that the system responds initially to either the full-power or the
half-power button. Users can change their mind after selecting one
of these and press the other button. The time is set and, if the door
is closed, the Start button is enabled. Pushing this button starts the
oven operation and cooking takes place for the specified time.
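As a minimal illustration of how this state machine might be coded, the Java sketch below uses the state names from the figure; the class itself is an illustrative assumption, not part of the original model (cooking and the buzzer are omitted):

enum OvenState { WAITING, HALF_POWER, FULL_POWER, SET_TIME, ENABLED, DISABLED, OPERATION }

class MicrowaveOven {
    private OvenState state = OvenState.WAITING;
    private boolean doorClosed = true;
    private int cookingTime = 0;          // seconds

    // The user may select either power level and may change their mind
    void pressHalfPower() { state = OvenState.HALF_POWER; }
    void pressFullPower() { state = OvenState.FULL_POWER; }

    void setTime(int seconds) {
        cookingTime = seconds;
        state = OvenState.SET_TIME;
        // The Start button is only enabled when the door is closed
        state = doorClosed ? OvenState.ENABLED : OvenState.DISABLED;
    }

    void openDoor()  { doorClosed = false; state = OvenState.DISABLED; }
    void closeDoor() { doorClosed = true; }

    void pressStart() {
        if (state == OvenState.ENABLED) {
            state = OvenState.OPERATION;  // cook for cookingTime seconds, then sound the buzzer
        }
    }
}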
 In a detailed system specification, you have to provide more
detail about both the stimuli and the system states (Figure 8.4).
This information may be maintained in a data dictionary or
encyclopedia.

 The problem with the state machine approach is that the number
of possible states increases rapidly. For large system models,
therefore, some structuring of these state models is necessary.
One way to do this is by using the notion of a super state that
encapsulates a number of separate states. This super state looks
like a single state on a high-level model but is then expanded in
more detail on a separate diagram. To illustrate this concept,
consider the Operation state in Figure 8.4. This is a super state that can be expanded, as illustrated in Fig 8.5.

Fig 8.5: Sub-states of an operation


 The Operation state includes a number of sub-states. It shows
that operation starts with a status check and that if any problems
are discovered an alarm is indicated and operation is disabled.
Cooking involves running the microwave generator for the
specified time; on completion, a buzzer is sounded. If the door is
opened during operation, the system moves to the disabled state
Semantic data models
 Data models are used to describe the logical structure of the data processed by the system. These are sometimes called semantic data models.
 An entity-relation-attribute model sets out the entities in the
system, the relationships between these entities and the entity
attributes
 Widely used in database design. Can readily be implemented
using relational databases.
 No specific notation provided in the UML but objects and
associations can be used.
 Figure 8.6 is an example of a data model that is part of the library system LIBSYS introduced in earlier chapters. Recall that


LIBSYS is designed to deliver copies of copyrighted articles that
have been published in magazines and journals and to collect
payments for these articles. Therefore, the data model must
include information about the article, the copyright holder and
the buyer of the article. I have assumed that payments for
articles are not made directly but through national copyright
agencies.
 Figure 8.6 shows that an Article has attributes representing the title, the authors, the name of the PDF file of the article and the
fee payable. This is linked to the Source, where the article was
published, and to the Copyright Agency for the country of
publication. Both Copyright Agency and Source are linked to
Country. The country of publication is important because
copyright laws vary by country. The diagram also shows that
Buyers place Orders for Articles.

Fig 8.6: Attributes of a Class for the Copyright Agency.
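A rough Java rendering of part of this data model is sketched below; the class and field names are inferred from the description and are illustrative only:

class Country { String name; String copyrightLaws; }
class CopyrightAgency { String name; Country country; }
class Source { String title; String publisher; Country countryOfPublication; }
class Buyer { String name; String address; }

class Article {
    String title;
    String authors;
    String pdfFile;              // name of the PDF file of the article
    double fee;                  // fee payable for a copy
    Source publishedIn;          // the Source where the article was published
    CopyrightAgency agency;      // agency for the country of publication
}

class Order {
    Buyer placedBy;              // Buyers place Orders for Articles
    Article item;
    java.util.Date orderDate;
}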

 Like all graphical models, data models lack detail, and you
should maintain more detailed descriptions of the entities,
relationships and attributes that are included in the model. We
may collect these more detailed descriptions in a repository or
data dictionary.
 Data dictionaries are generally useful when developing system
models
 They may be used to manage all information from all types of
system models.
 A data dictionary is, simplistically, an alphabetic list of the names included in the system models. As well as the name, the dictionary should include an associated description of the named entity and, if the name represents a composite object, a description of the composition. Other information, such as the date of creation, the creator and the representation of the entity, may also be included depending on the type of model being developed.
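An illustrative dictionary entry (the layout and field choices are assumptions, not a prescribed format) might be:

Name: Article
Description: Details of a published article that may be ordered by a buyer, including its title, authors, PDF file name and the fee payable.
Type: Entity (composite: title, authors, pdfFile, fee)
Date of creation: (date the entry was added)
Creator: (name of the requirements analyst)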
The advantages of using a data dictionary are:
 It is a mechanism for name management. Many people may have
to invent names for entities and relationships when developing a
large system model. These names should be used consistently
and should not clash. The data dictionary software can check for name uniqueness where necessary and warn requirements analysts of name duplications.
 It serves as a store of organizational information. As the system
is developed, information that can link analysis, design,
implementation and evolution is added to the data dictionary, so
that all information about an entity is in one place.

Object models
 Object models describe the system in terms of object classes and
their associations.
 An object class is an abstraction over a set of objects with
common attributes and the services (operations) provided by
each object.
 Objects are executable entities with the attributes and services of
the object class. Objects are instantiations of the object class, and
many objects may be created from a class.
 In object-oriented requirements analysis, we should model real-world entities using object classes.
 Various object models may be produced
o Inheritance models;
o Aggregation models;
o Interaction models.
 An object class in UML, as illustrated in the examples in Figure 8.7, is represented as a vertically oriented rectangle with three sections (a simple Java sketch of such a class is given below):
o The name of the object class is in the top section.
o The class attributes are in the middle section.
o The operations associated with the object class are in the
lower section of the rectangle.
 Object class identification is recognized as a difficult process
requiring a deep understanding of the application domain
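A minimal Java sketch of this idea, with the class name, its attributes and its operations corresponding to the three sections of the UML rectangle (the LibraryItem names are illustrative):

class LibraryItem {
    // Attributes (the middle section of the UML rectangle)
    String catalogueNumber;
    String title;
    int numberOfCopies;

    // Operations (the lower section of the UML rectangle)
    void acquire() { numberOfCopies++; }
    void catalogue(String number) { catalogueNumber = number; }
    void dispose() { numberOfCopies--; }
}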
Inheritance Models
Object-oriented modeling involves identifying the classes of object
that are important in the domain being studied. These are then
organized into a taxonomy. A taxonomy is a classification scheme that shows how an object class is related to other classes through common attributes and services.
The classes are organized into an inheritance hierarchy with the most
general object classes at the top of the hierarchy. More specialized
objects inherit their attributes and services. These specialized objects
may have their own attributes and services.
 Figure 8.7 illustrates part of a simplified class hierarchy for a
model of a library.
 This hierarchy gives information about the items held in the
library. The library holds various items, such as books, music,
recordings of films, magazines and newspapers. In Figure 8.7, the most general item is at the top of the tree and has a set of
attributes and services that are common to all library items.
These are inherited by the classes Published item and Recorded
item, which add their own attributes that are then inherited by
lower-level items.

Fig 8.7: Class hierarchy for the model for a library
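Building on the LibraryItem sketch shown earlier, a fragment of such a hierarchy could be written in Java as follows (the attribute choices are illustrative, based on the description of Figure 8.7):

class PublishedItem extends LibraryItem {
    String publisher;
    String publicationDate;
}

class RecordedItem extends LibraryItem {
    String medium;               // e.g. videotape, audio CD
}

class Book extends PublishedItem {
    String author;
    String isbn;
}

class Magazine extends PublishedItem {
    int year;
    int issueNumber;
}

class Film extends RecordedItem {
    String director;
    int dateOfRelease;
}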


Multiple inheritance models may also be constructed where a class
has several parents. Its inherited attributes and services are a
conjunction of those inherited from each super-class. Figure 8.8
shows an example of a multiple inheritance model that may also be
part of the library model.

Fig 8.8: Example of Multiple Inheritance


Object aggregation
 An aggregation model shows how classes that are collections are
composed of other classes.
 Aggregation models are similar to the part-of relationship in
semantic data models.
 In Figure 8.9, I have modeled a library item, which is a study pack for a university course. This study pack includes lecture notes, exercises, sample solutions, copies of transparencies used in lectures, and videotapes.
 The UML notation for aggregation is to represent the
composition by including a diamond shape on the source of the
link. Therefore, Figure 8.9 can be read as 'A study pack is composed of one or more assignments, OHP slide packages, lecture notes and videotapes'.

Fig 8.9: Aggregation for Classes
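A hedged Java sketch of this aggregation (the part class names are taken from the description; the fields are assumptions):

import java.util.List;

class Assignment { String title; }
class OHPSlides { String topic; }
class LectureNotes { String text; }
class Videotape { String tapeId; }

// A study pack is composed of one or more of each kind of part
class StudyPack {
    String courseTitle;
    List<Assignment> assignments;
    List<OHPSlides> slidePackages;
    List<LectureNotes> lectureNotes;
    List<Videotape> videotapes;
}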


Object behaviour modelling
 To model the behaviour of objects, you have to show how the
operations provided by the objects are used. In the UML, you
model behaviour using scenarios that are represented as UML
use-cases
 Sequence diagrams (or collaboration diagrams) in the UML are
used to model interaction between objects.
 For example, imagine a situation where the study packs shown in
Figure 8.9 could be maintained electronically and downloaded
to the student's computer.

Fig 8.10: Sequence Diagram for Library System


In a sequence diagram, objects and actors are aligned along the top
of the diagram. Labeled arrows indicate operations; the sequence of
operations is from top to bottom. In this scenario, the library user
accesses the catalogue to see whether the item required is available
electronically; if it is, the user requests the electronic issue of that
item. For copyright reasons, this must be licensed so there is a
transaction between the item and the user where the license is
agreed. The item to be issued is then sent to a network server object
for compression before being sent to the library user.

2.18. STRUCTURED METHODS

 A structured method is a systematic way of producing models of an existing system or of a system that is to be built. Structured methods were first developed in the 1970s to support software analysis and design.
 Methods define a set of models, a process for deriving these
models and rules and guidelines that should apply to the models.
 CASE tools support system modelling as part of a structured
method.
 Structured methods have been applied successfully in many large
projects. However, structured methods suffer from a number of
weaknesses:
o They do not model non-functional system requirements.
o They do not usually include information about whether a
method is appropriate for a given problem.
o They may produce too much documentation.
o The system models are sometimes too detailed and
difficult for users to understand
 A CASE workbench is a coherent set of tools that is designed to support related software process activities such as analysis, design or testing. An analysis and design workbench typically includes the following tools:

Fig 8.11: Software activities


 Diagram editors used to create object models, data models, and
behavioral models and so on. These editors are not just drawing
tools but are aware of the types of entities in the diagram. They
capture information about these entities and save this
information in the central repository.
 Design analysis and checking tools that process the design and report on errors and anomalies. These may be integrated with the
editing system so that user errors are trapped at an early stage in
the process.
 Repository query languages that allow the designer to find
designs and associated design information in the repository.
 A data dictionary that maintains information about the entities
used in a system design.
 Report definition and generation tools that take information
from the central store and automatically generate system
documentation.
 Forms definition tools that allow screen and document formats
to be specified.
 Import/export facilities that allow the interchange of information
from the central repository with other development tools.
 Code generators that generate code or code skeletons
automatically from the design captured in the central store.


CHAPTER 3
Design Engineering
KEY CONCEPTS
3.1. DESIGN PROCESS AND DESIGN QUALITY
3.2. DESIGN MODELS
3.3. SEVERE SYSTEM FAILURE

3.1. DESIGN PROCESS AND DESIGN QUALITY

 Design engineering encompasses the set of principles, concepts and practices that lead to the development of a high-quality system or product.
 Design creates a representation or model of the software
 Design model provides details about S/W architecture, interfaces
and components that are necessary to implement the system
 Quality is established during Design
 Design should exhibit firmness, commodity and delight.
 Design sits at the kernel of S/W Engineering
 Design sets the stage for construction

McGlaughlin suggests three characteristics for the evaluation of a good design:
 The design must implement all of the explicit requirements
contained in the analysis model, and it must accommodate all of
the implicit requirements desired by the customer.
 The design must be a readable, understandable guide for those
who generate code and for those who test and subsequently
support the software.
 The design should provide a complete picture of the software.
Quality Guidelines (What are the Characteristics of a good design?)
In order to evaluate the quality of a design representation, we must
establish technical criteria for good design.
 A design should exhibit an architecture that (1) has been created
using recognizable architectural styles or patterns, (2) is
composed of components that exhibit good design characteristics
and (3) can be implemented in an evolutionary fashion
 A design should be modular; that is, the software should be
logically partitioned into elements or subsystems
 A design should contain distinct representations of data,
architecture, interfaces, and components.
 A design should lead to data structures that are appropriate for
the classes to be implemented and are drawn from recognizable
data patterns.
 A design should lead to components that exhibit independent
functional characteristics.
 A design should lead to interfaces that reduce the complexity of
connections between components and with the external
environment.
 A design should be derived using a repeatable method that is
driven by information obtained during software requirements
analysis.
 A design should be represented using a notation that effectively
communicates its meaning.

Quality Attributes
Hewlett-Packard developed a set of software quality attributes that
has been given the acronym FURPS: functionality, usability, reliability, performance and supportability.
 Functionality: Assessed by evaluating the capabilities of the program, the generality of the functions that are delivered, and the security of the overall system.
 Usability: Assessed by considering human factors, consistency & aesthetics (appearance).
 Reliability: Evaluated by the frequency & severity of errors, Mean Time to Failure & the ability to recover from failure.
 Performance: Measured by processing speed, response time,
resource consumption, throughput & efficiency
 Supportability: Combines ability to extend the program,
adaptability & the ease with which the system can be installed.
Design Concepts:
 Abstractions
 Architecture
 Patterns
 Modularity
 Information Hiding
 Functional Independence
 Refinement
 Re-factoring
 Design Classes
Abstraction:
 Many levels of abstraction
 Highest level of abstraction: The solution is stated in broad terms using the language of the problem environment.
 Lower levels of abstraction: A more detailed description of the solution is provided.
 Procedural abstraction: Refers to a sequence of instructions that has a specific and limited function. For example, the word 'open' of a door implies a long sequence of procedural steps (e.g., walk to the door, grasp the knob, pull the door and step away from the moving door).
 Data abstraction: A named collection of data that describes a data object (e.g., 'door' would encompass a set of attributes like door type, weight, dimensions, etc.).
Architecture:
 Architecture is the structure or organization of program
components, the manner in which these components interact,
the structure of the data that are used by components.
 Architectural design can be represented using one or more of a number of different models:
o Structural Models: An organized collection of program components.
o Framework Models: Identify repeatable architectural design frameworks that are encountered in similar types of applications.
o Dynamic Models: Represent the behavioral aspects, indicating changes as a function of external events.
o Process Models: Focus on the design of the business or technical process.
o Functional Models: Used to represent the functional hierarchy of a system.
Patterns
 A design pattern describes a design structure that solves a particular problem.
It provides a description that enables a designer to determine the following:
 Whether the pattern is applicable to the current work
 Whether the pattern can be reused
 Whether the pattern can serve as a guide for developing a
similar but functionally or structurally different pattern
Modularity
 Divides software into separately named and addressable components, sometimes called modules.


 Consider two problems p1 & p2: if the complexity of p1 is greater than the complexity of p2, then the effort required to solve p1 is greater than the effort required to solve p2.
 Based on the divide-and-conquer strategy: it is easier to solve a complex problem when it is broken into sub-modules.
 As the number of modules increases, the cost per module decreases.
 As the number of modules increases, the cost to integrate the modules increases.
 Because of modularity testing and debugging can be done more
efficiently.
Information Hiding
 Information contained within a module is inaccessible to other modules that do not need such information.
 Achieved by defining a set of independent modules that communicate with one another only the information necessary to achieve the software function.
 Provides the greatest benefits when modifications are required
during testing and later
 Errors introduced during modification are less likely to propagate to other locations within the software.
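A small Java sketch of information hiding: the internal representation of the queue is private and can be changed without affecting client modules (the example is illustrative):

class PrintQueue {
    // The internal representation is hidden from all other modules
    private java.util.LinkedList<String> jobs = new java.util.LinkedList<>();

    // Only these operations are visible to the rest of the software
    public void add(String jobName) { jobs.addLast(jobName); }
    public String next()            { return jobs.pollFirst(); }
    public int length()             { return jobs.size(); }
}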
Functional Independence
 A direct outgrowth of modularity, abstraction and information hiding.
 Design the software so that each module addresses a specific sub-function of the requirements and has a simple interface when viewed from other parts of the program structure.
 Functional Independence is assessed using two conditions
o Cohesion: Relative functional strength of a module.
o Coupling: Indication of relative interdependence among
modules.
Refinement
 Process of elaboration from high level abstraction to the lowest
level abstraction

 High level abstraction begins with a statement of functions


 Refinement causes the designer to elaborate providing more and
more details at successive level of abstractions
 Abstraction and refinement are complementary concepts.
Refactoring
 Refactoring is a reorganization technique that simplifies design
(or code) of a component without changing its function or
behavior.
 Fowler defines refactoring as the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.
 When software is refactored, the existing design is examined for redundancy, unused design elements, inefficient algorithms, inappropriate data structures or any other design failures.
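A tiny illustrative refactoring in Java: the external behavior (the amounts returned) is unchanged, but duplicated discount logic is pulled into a single, clearly named method:

// Before refactoring: the discount rule is duplicated
class InvoiceBefore {
    double total(double price, int qty) {
        if (qty > 10) return price * qty * 0.9;
        return price * qty;
    }
    double quote(double price, int qty) {
        if (qty > 10) return price * qty * 0.9;
        return price * qty;
    }
}

// After refactoring: same external behavior, duplication removed
class InvoiceAfter {
    double total(double price, int qty) { return discounted(price, qty); }
    double quote(double price, int qty) { return discounted(price, qty); }

    private double discounted(double price, int qty) {
        double amount = price * qty;
        return (qty > 10) ? amount * 0.9 : amount;
    }
}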

3.2. DESIGN MODELS

DATA DESIGN ELEMENTS: Data design creates a model of data and/or information that is represented at a high level of abstraction.
 The structure of data has always been an important part of
software design
 At the architectural (application) level, data design focuses on
translation of data model into a database.
 At the program component level, data design focuses on data
structures that are required to implement the local data objects
 At the business level, the collection of information stored in disparate databases is reorganized into a “data warehouse”, which enables data mining.
Architectural Design Elements
 The architectural design for software is equivalent to the floor plan of a house. The floor plan gives us an overall view of the house.
 Architectural design elements give us an overall view of software.
 The architectural model is derived from three sources:
o Information about the application domain for the software to be built.
o Specific analysis model elements such as Data Flow
Diagram or analysis classes.
o Availability of architectural styles and patterns.
Interface Design Elements
 The interface design for software is equivalent to a set of detailed drawings for the doors, windows and external utilities of a house. They tell us how information flows into and out of the house and within the rooms that are part of the house.
 The interface design elements for software tell how information
flows into and out of the system and how it is communicated
among components defined as part of architecture.
 There are three important elements of interface design
o The user interface
o External interfaces to other systems, devices, networks,
or other consumers of information
o Internal interfaces between design components
 The design of UI incorporates
o Aesthetic elements(layout, color, graphics, interaction
mechanisms)
o Ergonomic elements (information layout and placement,
metaphors)
o Technical elements (UI patterns, reusable components)
 The design of external interfaces requires definitive information
about the entity to which information is sent or received.
 The design of internal interfaces is closely aligned with
component-level design
 UML defines an interface in the following manner: An interface
is a specifier for the externally visible [public] operations of a
class.
o For example, the Safe Home security function makes use of a control panel that allows the homeowner to control certain aspects of the security function. Control panel functions may be implemented via a wireless PDA or mobile phone.

 The ControlPanel class of the figure provides behavior associated with a keypad, and therefore it must implement the operations readKeystroke() and decodeKey(). If these operations are to be provided to other classes, it is useful to define an interface as shown in the figure.
 The interface named Keypad is shown as an <<interface>> stereotype or as a small, labeled circle connected to the class with a line. The interface is defined with no attributes, but it represents the set of operations necessary to achieve the behavior of a keypad.
 The dashed line with an open triangle at its end indicates that the ControlPanel class provides Keypad operations as part of its behavior. In UML, this is characterized as a realization.
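A hedged Java sketch of the Keypad interface and its realization by the ControlPanel class (the method names follow the text; the bodies are placeholder assumptions):

interface Keypad {
    // The operations necessary to achieve the behavior of a keypad
    char readKeystroke();
    int decodeKey(char key);
}

class ControlPanel implements Keypad {
    public char readKeystroke() {
        // In the real system this would poll the physical keypad hardware
        return '0';
    }

    public int decodeKey(char key) {
        // Map the raw keystroke onto a command code understood by the panel
        return Character.getNumericValue(key);
    }
}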

Component-Level Design Elements:


 The component-level design for software is equivalent to a set of detailed drawings for each room in a house. These drawings depict the wiring and plumbing within each room, the location of switches, showers, drains, etc. They also describe the flooring to be used and every other detail associated with a room.
 Component-level design for software fully describes the internal
detail of each software component.
 To accomplish this, component-level design describes data
structures for all local data objects and algorithmic detail for all
processing that occur within a component and an interface that
allows access to all component operations.
 In UML, the diagrammatic form is shown in the figure. The component, named SensorManagement, is represented. A dashed arrow connects the component to a class named Sensor. The SensorManagement component performs all functions associated with Safe Home sensors.

 The design details can be modeled at different levels of abstraction. An activity diagram can be used to represent
processing logic. Detailed procedural flow of a component can
be represented using either pseudo-code or some diagrammatic
form.
Deployment-Level Design Elements
 They indicate how software functionality and subsystems will be
allocated within computing environment that will support the
software.
 For example, the elements of the Safe Home product are configured to operate within three primary computing environments: a home-based PC, the Safe Home control panel, and a server housed at CPI Corp.

 In UML, a deployment diagram is developed as shown in the figure. The subsystems housed within each computing element are indicated.
 The diagram shown in the figure is in descriptor form. This means the deployment diagram shows the computing environment but does not explicitly indicate configuration details. For example, the PC could be a Macintosh, a Sun workstation or a Linux machine. These details are provided when the deployment diagram is revisited during later stages of design or as construction begins.
Performing User Interface Design
 User interface design creates an effective communication
medium between a human and a computer
 The design should be easy to learn, easy to use & easy to
understand.

Typical Design Errors


 Lack of consistency
 Too much memorization
 No guidance / help
 No context sensitivity
 Poor response
 Unfriendly
The Golden Rules
Theo Mandel coined three golden rules:
 Place the user in control
 Reduce the user’s memory load
 Make the interface consistent
Place the user in control
 Define interaction modes in a way that does not force a user into
unnecessary or undesired actions: An interaction mode is the current state of the interface. For example, if spell check is selected in a word-processor menu, the software moves to a spell-checking mode. There is no reason to force the user to remain in spell-check mode if the user wants to edit text.
 Provide for flexible interaction: Software might allow a user to
interact via keyboard commands, mouse movements, digitizer
pen, voice commands, etc. Not every action is amenable to every interaction mechanism. Consider the difficulty of using keyboard
commands (or voice input) to draw a complex shape.
 Allow user interaction to be interruptible and undoable: Even
when involved in sequence of actions the user should be able to
interrupt the sequence. The user should be able to undo any
action.
 Streamline interaction as skill levels advance and allow the
interaction to be customized: Design a macro mechanism that
enables an advanced user to customize the interface to facilitate
interaction
 Hide technical internals from the casual user. The user should
not be aware of operating system, file management functions, or


other computing technology.
 Design for direct interaction with objects that appear on the
screen: The user must be able to manipulate the objects when
necessary. For ex an interface that allows a user to stretch an
object is an implementation of direct manipulation.
Reduce the Users Memory Load
 Reduce demand on short-term memory: The interface should be
designed to reduce requirement to remember past actions and
results. This can be accomplished by providing visual clues that
enables a user to recognize past actions and results, rather than
having to recall them
 Establish meaningful defaults: The initial set of defaults should make sense for the average user, but a user should be able to specify individual preferences.
 Define shortcuts that are intuitive: When mnemonics are used to accomplish a system function, the mnemonic should be tied to the action in a way that is easy to remember (e.g., Alt-P to invoke the print function).
 The visual layout of the interface should be based on a real-world metaphor: For example, a bill payment system should use a checkbook and check register to guide the user through the bill-paying process.
 Disclose information in a progressive fashion: The interface should be organized hierarchically. For example, underline in a word processor is in the text style menu; after selecting underline, many underlining options are presented.
Make the Interface Consistent
 Allow the user to put the current task into a meaningful context.
Many interfaces implement complex layers of interactions with
dozens of screen images. It is important to provide indicators
(e.g., window titles, graphical icons, consistent color-coding) that
enable the user to know the context of the work at hand.
 Maintain consistency across a family of applications. A set of
applications (or products) should all implement the same design
rules so that consistency is maintained for all interaction.
 If past interactive models have created user expectations, do not
make changes unless there is a compelling reason to do so. Once
a particular interactive sequence has become a de facto standard
(e.g., the use of alt-S to save a file), the user expects this in
every application he encounters. A change (e.g., using alt-S to
invoke scaling) will cause confusion.
User Interface Analysis and Design
 The overall process for analyzing and designing a user interface
begins with the creation of different models of system function.
 The human and computer-oriented tasks that are required to
achieve system function are then delineated; design issues that
apply to all interface designs are considered; and the result is
evaluated for quality.
Interface Analysis & Design Models
 There are four different models that are to be considered when a
user interface is to be analyzed and designed.
 User model – Establishes a profile of all end users of the system.
Users can be categorized as:
 Novice – No syntactic knowledge of the system and little
semantic knowledge of the application or of computer usage in
general
 Knowledgeable, intermittent users – Reasonable semantic
knowledge of the application but low recall of the syntactic
information needed to use the system
 Knowledgeable, frequent users – Good semantic and syntactic
knowledge
 Design model – A design model of the entire system
incorporates data, architectural, interface and procedural
representations of the software, and serves as a design
realization of the user model.
 User’s mental model (system perception) – The user’s mental
image of what the interface is
 Implementation model – The interface “look and feel” coupled
with supporting information that describes interface syntax and
semantics
When the implementation model and user’s mental model coincide,
users generally feel comfortable with the software and use it
effectively.
User interface analysis and design process
 The user interface analysis and design process is an iterative
process and it can be represented as a spiral model
 It consists of 4 framework activities
o User, task and environment analysis
o Interface design
o Interface construction
o Interface validation
Interface analysis
 Understanding the user who interacts with the system based on
their skill levels.
 The tasks the user performs to accomplish the goals of the system
are identified, described and elaborated.
Interface design
 In interface design, all interface objects and actions that enable a
user to perform all desired tasks are defined.
Implementation
 A prototype is initially constructed and then later user interface
development tools may be used to complete the construction of
the interface.
Validation
 The correctness of the system is validated against the user
requirements.
Interface Analysis
 Interface analysis means understanding
o the people (end-users) who will interact with the system
through the interface;
o the tasks that end-users must perform to do their work;
o the content that is presented as part of the interface;
o the environment in which these tasks will be conducted.
User Analysis
How do we learn what users want from the UI?
 User Interviews: Interviews involve representatives from the software
team who meet with end-users to better understand their needs.
 Sales Input: Sales people meet with customers and users on a
regular basis and gather information that will help the software team
to categorize users.
 Support Input: Support staff talk with users on a daily basis.
The following set of questions will help interface designers better
understand the users of a system:
o Are users trained professionals, technicians, clerical, or
manufacturing workers?
o What level of formal education does the average user
have?
o Are the users capable of learning from written materials
or have they expressed a desire for classroom training?
o Are users expert typists or keyboard phobic?
o What is the age range of the user community?
o Will the users be represented predominately by one
gender?
o How are users compensated for the work they perform?
o Do users work normal office hours or do they work until
the job is done?
o What is the primary spoken language among users?
o What are the consequences if a user makes a mistake
using the system?
o Are users experts in the subject matter that is
addressed by the system?
o Do users want to know about the technology that sits
behind the interface?
Task Analysis and Modeling
The goal of task analysis is to answer the following questions.
 What work will the user perform in specific circumstances?
 What tasks and subtasks will be performed as the user does the
work?
 What specific problem domain objects will the user manipulate as
work is performed?
 What is the sequence of work tasks—the workflow?
 What is the hierarchy of tasks?
Analysis Techniques
 Use-cases define basic interaction. When used as part of task
analysis the use-case is developed to show how an end-user
performs some specific work-related task.
 Task elaboration refines interactive tasks
Regardless of the overall approach to task analysis, human engineers
must first define and classify tasks.
For example, assume that a small software company wants to build a
computer-aided design system explicitly for interior designers. By
observing an interior designer at work, the engineer notices that
interior design comprises a number of major activities: furniture
layout, fabric and material selection, wall and window coverings
selection, presentation (to the customer), costing, and shopping.
Each of these major tasks can be elaborated into subtasks. For
example, furniture layout can be refined into the following tasks: (1)
draw a floor plan based on room dimensions; (2) place windows and
doors at appropriate locations; (3) use furniture templates to draw
scaled furniture outlines on the floor plan; (4) move furniture outlines
to get the best placement; (5) label all furniture outlines; (6) draw
dimensions to show location; (7) draw a perspective view for the
customer. Subtasks 1–7 can each be refined further.
 Object elaboration identifies interface objects (classes). For example,
the furniture template might translate into a class called Furniture
with attributes that might include size, shape, location and
others (a minimal sketch of such a class is shown after this list).
 Workflow analysis defines how a work process is completed when
several people (and roles) are involved
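As a concrete illustration of object elaboration, the Furniture class mentioned above might be sketched in Java as follows. This is only a minimal sketch: the attribute types and the moveTo operation are assumptions introduced here for illustration.

// Hypothetical sketch of the Furniture interface object identified
// during object elaboration. The attribute names follow the text; the
// types and the moveTo() operation are illustrative assumptions.
class Furniture {
    private String shape;    // e.g., "sofa outline", "table outline"
    private double width;    // size expressed as width x depth
    private double depth;
    private double x;        // location on the floor plan
    private double y;

    Furniture(String shape, double width, double depth) {
        this.shape = shape;
        this.width = width;
        this.depth = depth;
    }

    // Move the scaled furniture outline to a new position on the floor plan
    void moveTo(double newX, double newY) {
        this.x = newX;
        this.y = newY;
    }
}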
Analysis of Display Content
During this interface analysis step, the format and aesthetics of the
content are considered. Among the questions that are asked and
answered are
 Are different types of data assigned to consistent geographic
locations on the screen (e.g., photos always appear in the upper
right hand corner)?
 Can the user customize the screen location for content?
 Is proper on-screen identification assigned to all content?
 If a large report is to be presented, how should it be partitioned
for ease of understanding?
 Will mechanisms be available for moving directly to summary
information for large collections of data?
 Will graphical output be scaled to fit within the bounds of the
display device that is used?
 How will color be used to enhance understanding?
 How will error messages and warnings be presented to the user?
Analysis of work environment
 Analysis of work environment focuses on
o Where will the interface be located physically?
o Will the user be sitting, standing, or performing other
tasks unrelated to the interface?
o Does the interface hardware accommodate space, light,
or noise constraints?
o Are there special human factors considerations driven by
environmental factors?
Interface Design Steps
Once interface analysis has been completed, all tasks required by the
end-user have been identified in detail, and the interface design
activity commences. Interface design, like all software engineering
design, is an iterative process.
Although many different user interface design models have been
proposed, all suggest some combination of following steps
 Using information developed during interface analysis define
interface objects and actions (operations).
 Define events (user actions) that will cause the state of the user
interface to change. Model this behavior.
 Depict each interface state as it will actually look to the end-user.
 Indicate how the user interprets the state of the system from
information provided through the interface.
Applying Interface Design Steps
An important step in interface design is definition of interface objects
& actions that are applied to them. Nouns (objects) & verbs
(actions) are isolated to create a list of objects & actions.
Once objects & actions have been identified, they are categorized by
type.
Target, source & application objects: A source object (e.g., a report) is
dragged & dropped onto a target object (e.g., a printer icon). The
implication of this action is to create a hard-copy report.
An application object represents application-specific data that are
not directly manipulated as part of screen interaction (for example, a
mailing list used to store names; the list itself might be sorted or
merged, but it is not dragged & dropped via user interaction).
When the designer is satisfied that all important objects & actions have
been identified, screen layout is performed. Screen layout is an
iterative process in which graphical design & placement of icons,
tiling of windows, and definition of major & minor menu items are
conducted.
Interface Design Patterns
 Patterns are available for the complete UI, page layout, forms
and input, tables, direct data manipulation, navigation,
searching, page elements & e-commerce.
Design Issues
 Response time
 Help facilities
 Error handling
 Menu and command labeling
 Application accessibility
 Internationalization
Response time: System response time is the primary complaint for
many interactive applications. In general, system response time is
measured from the point at which the user performs some control
action (e.g., hits the return key or clicks a mouse) until the software
responds with desired output or action.
System response time has two important characteristics: length and
variability.
If the length of system response time is too long, user frustration and
stress are inevitable.
Variability refers to the deviation from average response time, and in
many ways, it is the most important response time characteristic.
Low variability enables the user to establish an interaction rhythm,
even if response time is relatively long. For example, a 1-second
response to a command is preferable to a response that varies from
0.1 to 2.5 seconds.
Help facilities: Almost every user of an interactive, computer-based
system requires help now and then. Modern software provides
on-line help facilities that enable a user to get a question
answered or resolve a problem without leaving the interface.
Two different types of help facilities are encountered: integrated and
add-on
An integrated help facility is designed into the software from the
beginning.
An add-on help facility is added to the software after the system has
been built.
A number of design issues must be addressed when a help facility is
considered:
 Will help be available for all system functions and at all times
during system interaction?
 How will the user request help?
 How will help be represented?
 How will the user return to normal interaction?
 How will help information be structured?
Error handling: Error messages and warnings are "bad news"
delivered to users of interactive systems when something has gone
awry. There are few computer users who have not encountered an
error of the form:

SEVERE SYSTEM FAILURE -- 14A

Somewhere, an explanation for error 14A must exist; otherwise,
why would the designers have added the identification? Yet, the
error message provides no real indication of what is wrong. Such
messages leave the user frustrated and anxious.
In general, every error message or warning produced by an
interactive system should have the following characteristics:
 The message should describe the problem in jargon that the user
can understand.
 The message should provide constructive advice for recovering
from the error.
 The message should indicate any negative consequences of the
error (e.g., potentially corrupted data files) so that the user can
check to ensure that they have not occurred (or correct them if
they have).
 The message should be accompanied by an audible or visual cue.
That is, a beep might be generated to accompany the display of
the message, or the message might flash momentarily or be
displayed in a color that is easily recognizable as the "error
color."
 The message should be "nonjudgmental." That is, the wording
should never place blame on the user
Menu and command labeling: The typed command was once the
most common mode of interaction between user and system
software and was commonly used for applications of every type.
A number of design issues arise when typed commands are provided
as a mode of interaction:
 Will every menu option have a corresponding command?
 What form will commands take? Options include a control
sequence (e.g., alt-P), function keys, or a typed word.
 How difficult will it be to learn and remember the commands?
What can be done if a command is forgotten?
 Can commands be customized or abbreviated by the user?
Application accessibility
As computing applications become ubiquitous, software engineers
must ensure that interface design encompasses mechanisms that
enable easy access for those with special needs. Accessibility for users
who may be physically challenged is an imperative for moral, legal
and business reasons.
Internationalization
Too often interfaces are designed for one locale and language and
then jury-rigged to work in other countries. The challenge for
interface designers is to create globalized software. A variety of
international guidelines are available to software engineers. These
guidelines address broad design issues (e.g., screen layouts) and
discrete implementation issues (e.g., different alphabets create
specialized labeling and spacing requirements).
Design Evaluation
Once an operational user interface prototype has been created, it
must be evaluated to determine whether it meets the needs of the
user.
The user interface evaluation cycle takes the form shown in Figure.
After the design model has been completed, a first-level prototype is
created. The prototype is evaluated by the user, who provides the
designer with direct comments about the efficacy of the interface.
Design modifications are made based on user input and the next level
prototype is created. The evaluation cycle continues until no further
modifications to the interface design are necessary.
Once the first prototype is built, the designer can collect a variety of
qualitative and quantitative data that will assist in evaluating the
interface. To collect qualitative data, questionnaires can be
distributed to users of the prototype. Questions can be all (1) simple
yes/no response, (2) numeric response, (3) scaled (subjective)
response, or (4) percentage (subjective) response.
3.4. OBJECT-ORIENTED DESIGN
An object-oriented system is made up of interacting objects that
maintain their own local state and provide operations on that state.
Object-oriented design processes involve designing object classes and
the relationships between these classes.
Object-oriented design is part of object-oriented development where
an object-oriented strategy is used throughout the development
process:
 OOA is concerned with developing an object model of the
application domain.
 OOD is concerned with developing an object-oriented system
model to implement requirements.
 OOP is concerned with realizing an OOD using an OO
programming language such as Java or C++.
Characteristics of OOD
 Objects are abstractions of real-world or system entities and
manage themselves.
 Objects are independent
 System functionality is expressed in terms of object services.
 Objects communicate by message passing.
 Objects may be distributed and may execute sequentially or in
parallel
Advantages of OOD
 Easier maintenance. Objects may be understood as stand-alone
entities.
 Objects are potentially reusable components
Objects & Object Classes
 Object: An object is an entity that has a state and a defined set
of operations that operate on that state. The state is represented
as a set of object attributes. The operations associated with the
object provide services to other objects (clients) which request
these services when some computation is required.
 Object class: Object classes are templates for objects. They are
used to create objects. A class includes declarations of all the
attributes and operations that are associated with objects of that
class.
In the UML, an object class is represented as a named rectangle with
two sections. The object attributes are listed in the top section. The
operations that are associated with the object are set out in the
bottom section. The figure illustrates this notation using an object class
that models an employee in an organization. The UML uses the term
operation to mean the specification of an action; the term method is
used to refer to the implementation of an operation.
The ellipsis (...) indicates that there are more attributes associated
with the class than are shown.
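Although the UML figure itself is not reproduced here, the same Employee class can be written directly in Java. The particular attributes and operations chosen below (name, address, salary, join, retire, changeDetails) are illustrative assumptions; each UML operation corresponds to the Java method that implements it.

// A minimal Java sketch of an Employee object class. The attributes
// (the object's state) appear first, followed by the operations
// (services) offered to client objects.
class Employee {
    private String name;
    private String address;
    private double salary;
    private boolean retired;

    Employee(String name) {
        this.name = name;
    }

    // Operations that provide services to other objects
    void join(String startingAddress, double startingSalary) {
        this.address = startingAddress;
        this.salary = startingSalary;
        this.retired = false;
    }

    void retire() {
        this.retired = true;
    }

    void changeDetails(String newAddress) {
        this.address = newAddress;
    }
}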
Object communication
Conceptually, objects communicate by message passing.
Messages
 The name of the service requested by the calling object;
 Copies of the information required to execute the service and
the name of a holder for the result of the service.
Message examples
// Call a method associated with a buffer object
// that returns the next value in the buffer
v = circularBuffer.Get();

// Call the method associated with a thermostat object
// that sets the temperature to be maintained
thermostat.setTemp(20);
Generalization or inheritance hierarchy shows the relationship
between object classes. Classes are arranged in a class hierarchy
where one class (a super-class) is a generalization of one or more
other classes (sub-classes). A sub-class inherits the attributes and
operations from its super-class and may add new methods or
attributes of its own. Generalization in the UML is implemented as
inheritance in OO programming languages.
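As a brief, assumed example (the class names below are not taken from the text), generalization is expressed in Java with the extends keyword:

// Hypothetical super-class: a generalization of all employees.
class Employee {
    protected String name;

    void changeAddress(String newAddress) {
        // common operation inherited by every sub-class
    }
}

// Sub-class: inherits name and changeAddress() from Employee
// and adds an attribute and an operation of its own.
class Manager extends Employee {
    private String department;

    void assignDepartment(String dept) {
        this.department = dept;
    }
}

Because Manager extends Employee, a Manager object can be used wherever an Employee is expected; that is the sense in which the sub-class specializes its super-class.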

Problems with inheritance
Object classes are not self-contained. They cannot be understood
without reference to their super-classes.
UML associations
Objects and object classes participate in relationships with other
objects and object classes. Associations may be annotated with
information that describes the association. Associations are general
but may indicate that an attribute of an object is an associated object
or that a method relies on an associated object.
Concurrent Objects
In practice, most object-oriented programming languages have as
their default a serial execution model where requests for object
services are implemented in the same way as function calls.
Therefore, when an object called theList is created from a normal
object class, you write in Java:
theList.append(17);
This calls the append method associated with theList to add the
element 17 to the list, and execution of the calling object is
suspended until the append operation has been completed.
However, Java includes a very simple mechanism (threads) that lets
you create objects that execute concurrently. Threads are created in
Java by using the built-in Thread class as a parent class in a class
declaration. Threads must include a method called run, which is
started by the Java run-time system when objects that are defined as
threads are created. It is therefore easy to take an object-oriented
design and produce an implementation where the objects are
concurrent processes.
There are two kinds of concurrent object implementation:
1. Servers where the object is realized as a parallel process with
methods corresponding to the defined object operations. Methods
start up in response to an external message and may execute in
parallel with methods associated with other objects. When they have
completed their operation, the object suspends itself and waits for
further requests for service.
2. Active objects where the state of the object may be changed by
internal operations executing within the object itself. The process
representing the object continually executes these operations so
never suspends itself.
Servers are most useful in a distributed environment where the
calling and the called object may execute on different computers.
The response time for the service that is requested is unpredictable,
so the object that has requested a service does not have to
wait for that service to be completed. Servers can also be used in a
single machine where a service takes some time to complete (e.g.,
printing a document) and several objects may request the service.
Active objects are used when an object needs to update its own state
at specified intervals. Figure shows how an active object may be
defined and implemented in Java.
The object class represents a transponder on an aircraft. The
transponder keeps track of the aircraft's position using a satellite
navigation system. It can respond to messages from air traffic control
computers. It provides the current aircraft position in response to a
request to the givePosition method. This object is implemented as a
thread where a continuous loop in the run method includes code to
compute the aircraft's position using signals from satellites.
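The figure referred to above is not reproduced here; the following is an assumed reconstruction of how such an active object might be written in Java. The field names and the readSatelliteSignals helper are placeholders introduced for illustration.

// Assumed sketch of an active object: a transponder implemented as a
// Java thread. The run() loop continually updates the object's own
// state, while givePosition() answers requests from other objects.
class Transponder extends Thread {
    private double latitude;    // state updated internally by run()
    private double longitude;

    // Service offered to air traffic control computers
    public synchronized double[] givePosition() {
        return new double[] { latitude, longitude };
    }

    // Started by the Java run-time system when the thread is started;
    // the loop never suspends itself, so this is an active object.
    @Override
    public void run() {
        while (true) {
            double[] fix = readSatelliteSignals();   // assumed helper
            synchronized (this) {
                latitude = fix[0];
                longitude = fix[1];
            }
        }
    }

    // Placeholder for the satellite navigation computation
    private double[] readSatelliteSignals() {
        return new double[] { latitude, longitude };
    }
}

A client would create the object, call start() on it, and may then call givePosition() at any time to obtain the most recently computed position.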
Object Oriented Design Process
The general process that is used for object-oriented design has a
number of process stages:
 Define the context and modes of use of the system;
 Design the system architecture;
 Identify the principal system objects;
 Develop design models;
 Specify object interfaces.
Weather station description (Let us consider this example for OOD)
A weather station is a package of software-controlled instruments,
which collects data, performs some data processing and transmits this
data for further processing. The instruments include air and ground
thermometers, an anemometer, a wind vane, a barometer and a rain
gauge. Data is collected periodically.
When a command is issued to transmit the weather data, the
weather station processes and summarizes the collected data. The
summarized data is transmitted to the mapping computer when a
request is received.
System Context & models of use
The first stage in any software design process is to develop an
understanding of the relationships between the software that is being
designed and its external environment.
The system context and the model of system use represent two
complementary models of the relationships between a system and its
environment:
 The system context is a static model that describes the other
systems in that environment. Use a subsystem model to show
other systems.
 The model of the system use is a dynamic model that describes
how the system actually interacts with its environment. Use use-
cases to show interactions; a use-case model shows the system
features as ellipses and the interacting entity as a stick figure.
The use-case model for the weather station is shown in the figure. This
shows that the weather station interacts with external entities for startup
and shutdown, for reporting the weather data that has been
collected, and for instrument testing and calibration.
Each of these use-cases can be described in structured natural
language. The use-case description helps to identify objects and
operations in the system.
Architectural Design
Once interactions between the system and its environment have
been understood, you use this information for designing the system
architecture.
A layered architecture is appropriate for the weather station
 Interface layer for handling communications;
 Data collection layer for managing collection of data from
instruments and summarizing weather data before transmission to
mapping system;
 Instruments layer encapsulation of all instruments used to collect
raw data.
A good rule of thumb is that there should normally be no more than 7
entities in an architectural model.
Object Identification
 Identifying objects (or object classes) is the most difficult part of
object oriented design.
 There is no 'magic formula' for object identification. It relies on
the skill, experience and domain knowledge of system designers.
 Object identification is an iterative process. You are unlikely to
get it right first time.
Approaches to identification
 Use a grammatical approach based on a natural language
description of the system: objects and attributes are nouns;
operations or services are verbs (used in the HOOD OOD method
widely used in the European aerospace industry).
 Base the identification on tangible things in the application
domain, such as aircraft; roles such as manager; events such as
a request; interactions such as meetings; locations such as offices;
organizational units such as companies; and so on.
 Use a Behavioural approach and identify objects based on what
participates in what behaviour.
 Use a scenario-based analysis. The objects, attributes and
methods in each scenario are identified.
Weather station object classes
 Ground thermometer, Anemometer, Barometer: Application
domain objects that are ‘hardware’ objects related to the
instruments in the system.
 Weather station: The basic interface of the weather station to its
environment. It therefore reflects the interactions identified in
the use-case model.
 Weather data: Encapsulates the summarized data from the
instruments.
Fig 1: Examples of object classes in weather station system
Further objects and object refinement
 Use domain knowledge to identify more objects and operations
o Weather stations should have a unique identifier;
o Weather stations are remotely situated so instrument
failures have to be reported automatically. Therefore,
attributes and operations for self-checking are required.
 Active or passive objects
o In this case, objects are passive and collect data on
request rather than autonomously. This introduces
flexibility at the expense of controller processing time.
Design Models
Design models show the objects and object classes and relationships
between these entities. Design models essentially are the design.
They are the bridge between the requirements for the system and
the system implementation. An important step in the design process,
therefore, is to decide which design models you need and the
level of detail of these models.
There are two types of design models that should normally be
produced to describe an object-oriented design:
 Static models describe the static structure of the system in terms
of object classes and relationships.
 Dynamic models describe the dynamic interactions between
objects
The UML provides for 12 different static and dynamic models that
may be produced to document a design.
 Subsystem models that show logical groupings of objects into
coherent sub-systems. These are represented using a form of
class diagram where each sub-system is shown as a package.
Subsystem models are static models.
 Sequence models that show the sequence of object interactions.
These are represented using a UML sequence or a collaboration
diagram. Sequence models are dynamic models.
 State machine models that show how individual objects change
their state in response to events. These are represented in the
UML using state chart diagrams. State machine models are
dynamic models.
Subsystem models
Shows how the design is organized into logically related groups of
objects. In the UML, these are shown using packages - an
encapsulation construct. This is a logical model; the actual
organization of objects in the system may be different.
Sequence Models:
Sequence models show the sequence of object interactions that take
place
 Objects are arranged horizontally across the top;
 Time is represented vertically so models are read top to bottom;
 Interactions are represented by labeled arrows; different styles of
arrow represent different types of interaction;
 A thin rectangle in an object lifeline represents the time when
the object is the controlling object in the system.
State charts
The UML uses state charts, initially invented by Harel, to describe
state machine models.
The figure is a state chart for the Weather Station object that shows how
it responds to requests for various services.
You can read this diagram as follows:
 If the object state is Shutdown then it can only respond to a
startup() message. It then moves into a state where it is waiting
for further messages. The unlabeled arrow with the black blob
indicates that the Shutdown state is the initial state.
 In the Waiting state, the system expects further messages. If a
shutdown() message is received, the object returns to the
Shutdown state.
 If a reportWeather() message is received, the system moves to
the Summarizing state. When the summary is complete, the
system moves to a Transmitting state where the information is
transmitted through the Comms Controller. It then returns to
the Waiting state.
 If a calibrate() message is received, the system moves to the
Calibrating state, then the Testing state, and then the
Transmitting state, before returning to the Waiting state.
 If a test() message is received, the system moves directly to the
Testing state.
 If a signal from the clock is received, the system moves to the
Collecting state, where it is collecting data from the instruments.
Each instrument is instructed in turn to collect its data.
Fig 2: State Diagram for weather station
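To make the behaviour concrete, the state chart just described could be approximated in Java with an enum and a few message-handling methods. This is only an illustrative sketch: the state and message names follow the description above, but the code structure (and the simplification of passing through intermediate states immediately) is an assumption.

// Illustrative sketch of the weather station state chart. Intermediate
// states (Summarizing, Calibrating, Testing, Collecting) are passed
// through immediately in this simplified model before returning to
// Waiting.
class WeatherStationStateMachine {
    enum State { SHUTDOWN, WAITING, SUMMARIZING, TRANSMITTING,
                 CALIBRATING, TESTING, COLLECTING }

    private State state = State.SHUTDOWN;   // Shutdown is the initial state

    void startup()  { if (state == State.SHUTDOWN) state = State.WAITING; }
    void shutdown() { if (state == State.WAITING)  state = State.SHUTDOWN; }

    void reportWeather() {
        if (state == State.WAITING) {
            state = State.SUMMARIZING;      // summarize collected data
            state = State.TRANSMITTING;     // transmit through the Comms Controller
            state = State.WAITING;
        }
    }

    void calibrate() {
        if (state == State.WAITING) {
            state = State.CALIBRATING;
            state = State.TESTING;
            state = State.TRANSMITTING;
            state = State.WAITING;
        }
    }

    void test() {
        if (state == State.WAITING) {
            state = State.TESTING;
            state = State.TRANSMITTING;
            state = State.WAITING;
        }
    }

    void clockSignal() {
        if (state == State.WAITING) {
            state = State.COLLECTING;       // instruct each instrument in turn
            state = State.WAITING;
        }
    }
}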


Object Interface Specification
 Object interfaces have to be specified so that the objects and
subsystems can be designed in parallel.
 Designers should avoid designing the interface representation but
should hide this in the object itself.
 Objects may have several interfaces, which are viewpoints on the
methods provided.
 The UML uses class diagrams for interface specification but Java
may also be used.
The Java description can show that some methods can take different
numbers of parameters. Therefore, the shutdown method can be
applied either to the station as a whole if it has no parameters or to a
single instrument.
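An assumed illustration of such a Java interface description follows. The operation names are taken from the weather station discussion above, while the Instrument type is a placeholder introduced here; overloading lets shutdown apply either to the whole station or to a single instrument.

// Assumed Java sketch of the weather station's interface. Clients see
// only these operations; the representation behind them stays hidden
// inside the implementing object.
interface WeatherStationInterface {
    void startup();
    void shutdown();                 // shut down the whole station
    void shutdown(Instrument i);     // shut down a single instrument
    void reportWeather();
    void calibrate(Instrument i);
    void test();
}

// Placeholder type used only to make the sketch self-contained.
interface Instrument { }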
Design Evolution
 Hiding information inside objects means that changes made to an
object do not affect other objects in an unpredictable way.
 Assume pollution-monitoring facilities are to be added to
weather stations. These sample the air and compute the amount
of different pollutants in the atmosphere. Pollution readings are
transmitted with weather data.
Changes required
 Add an object class called AirQuality as part of WeatherStation.
 Add an operation reportAirQuality to WeatherStation. Modify
the control software to collect pollution readings.
 Add objects representing pollution-monitoring instruments.
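A minimal sketch of what these changes might look like in Java follows; the AirQuality attribute names and the collect operation are assumptions introduced for illustration. Because the pollution readings are hidden inside the new object, the change does not ripple into other objects.

// Assumed sketch of the pollution-monitoring extension.
class AirQuality {
    private double smokeReading;     // assumed pollutant attributes
    private double benzeneReading;

    void collect() {
        // sample the air via the pollution-monitoring instruments
    }
}

class WeatherStation {
    private final AirQuality airQuality = new AirQuality();

    void reportAirQuality() {
        airQuality.collect();
        // transmit pollution readings along with the weather data
    }
}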
CHAPTER 4
Testing Strategies
KEY CONCEPTS
4.1. A STRATEGIC APPROACH TO SOFTWARE TESTING
4.2. TEST STRATEGIES FOR CONVENTIONAL SOFTWARE
4.3. WHITE-BOX TESTING
4.4. GRAPH MATRICES
4.5. BLACK-BOX TESTING
4.6. THE ART OF DEBUGGING
4.7. PRODUCT METRICS
4.8. ISO 9126 QUALITY FACTORS
4.9. A FRAMEWORK FOR PRODUCT METRICS
4.10. METRICS FOR ANALYSIS MODEL
4.11. METRICS FOR DESIGN MODEL
4.12. CLASS-ORIENTED METRICS: THE MOOD METRICS SUITE
4.13. METRICS FOR PROCESS AND PRODUCTS
4.14. SOFTWARE MEASUREMENT
Testing is the process of exercising a program with the specific
intent of finding errors prior to delivery to the end user.
Testing shows errors, requirements conformance, performance
& indication of quality.
4.1. A STRATEGIC APPROACH TO SOFTWARE TESTING
A test strategy incorporates test planning, test case design, test
execution, and resultant data collection and evaluation.
All provide the software developer with a template for testing and all
have the following generic characteristics:
 Perform formal technical reviews (FTR) to uncover errors during
software development
 Begin testing at component level and move outward to
integration of entire component based system.
 Adopt testing techniques relevant to stages of testing
 Testing can be done by the software developer and an independent
testing group
 Testing and debugging are different activities. Debugging follows
testing
Verification & Validation
 Verification refers to the set of activities that ensure that
software correctly implements a specific function.
 Validation refers to a different set of activities that ensure that
the software that has been built is traceable to customer
requirements.
Boehm [BOE81] states it this way:
 Verification: "Are we building the product right?"
 Validation: "Are we building the right product?"
The definition of V&V encompasses many of the activities that we
have referred to as software quality assurance (SQA).
Verification and validation encompasses a wide array of SQA
activities that include formal technical reviews, quality and
configuration audits, performance monitoring, simulation, feasibility
study, documentation review, database review, algorithm analysis,
development testing, qualification testing, and installation testing.
Although testing plays an extremely important role in V&V, many
other activities are also necessary.
Organizing for Software Testing
The people who have built the software are now asked to test the
software. This seems harmless in itself; after all, who knows the
program better than its developers do? Unfortunately, these same
developers have a vested interest in demonstrating that the program
is error free, that it works according to customer requirements.
From a psychological point of view, software analysis and design
(along with coding) are constructive tasks. From the point of view of
the builder, testing can be considered to be (psychologically)
destructive.
The software developer is always responsible for testing the
individual units (components) of the program, ensuring that each
performs the function for which it was designed. In many cases, the
developer also conducts integration testing—a testing step that leads
to the construction (and test) of the complete program structure.
Only after the software architecture is complete does an
independent test group become involved.
The role of an independent test group (ITG) is to remove the
inherent problems associated with letting the builder test the thing
that has been built. Independent testing removes the conflict of
interest that may otherwise be present.
However, the software engineer does not turn the program over to the
ITG and walk away. The developer and the ITG work closely
throughout a software project to ensure that thorough tests will be
conducted. While testing is conducted, the developer must be
available to correct errors that are uncovered.
The ITG is part of the software development project team in the
sense that it becomes involved during the specification activity and
stays involved (planning and specifying test procedures) throughout a
large project.
A Software Testing Strategy for Conventional Software Architecture
The software engineering process may be viewed as the spiral
illustrated in Fig 1. Initially, system engineering defines the role of
software and leads to software requirements analysis, where the
information domain, function, behavior, performance, constraints,
and validation criteria for software are established.
Fig 1: Conventional software architecture
Unit testing begins at the vortex of the spiral and concentrates on
each unit (i.e., component) of the software as implemented in
source code. Testing progresses by moving outward along the
spiral to integration testing, where the focus is on design and the
construction of the software architecture. Taking another turn
outward on the spiral, we encounter validation testing, where
requirements established as part of software requirements analysis are
validated against the software that has been constructed. Finally, we
arrive at system testing, where the software and other system
elements are tested as a whole.
Unit testing makes heavy use of white-box testing techniques. Black-
box test case design techniques are the most prevalent during
integration, although a limited amount of white-box testing may be
used to ensure coverage of major control paths. Black-box testing
techniques are used exclusively during validation. Software, once
validated, must be combined with other system elements (e.g.,
hardware, people, and databases). System testing verifies that all
elements mesh properly and that overall system
function/performance is achieved.
Fig 2: Testing Strategy in Conventional Software
4.2. TEST STRATEGIES FOR CONVENTIONAL SOFTWARE
Unit Testing
This is a figure just to understand unit testing.
Unit testing focuses verification effort on the smallest unit of
software design—the software component or module. Using the
component-level design description as a guide, important control
paths are tested to uncover errors within the boundary of the
module. The unit test is white-box oriented.
Unit Test Considerations
The tests that occur as part of unit tests are illustrated schematically
in figure 3. The module interface is tested to ensure that
information properly flows into and out of the program unit under
test. The local data structure is examined to ensure that data stored
temporarily maintains its integrity during all steps in an algorithm's
execution. Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or restrict
processing. All independent paths (basis paths) through the control
structure are exercised to ensure that all statements in a module have
been executed at least once. And finally, all error handling paths are
tested.
Fig 3: Unit testing considerations
What errors are commonly found during Unit Testing?
(1) Misunderstood or incorrect arithmetic precedence,
(2) Mixed mode operations,
(3) Incorrect initialization,
(4) Precision inaccuracy,
(5) Incorrect symbolic representation of an expression. Comparison
and control flow are closely coupled to one another (i.e., change of
flow frequently occurs after a comparison).
Test cases should uncover errors such as
(1) Comparison of different data types,
(2) Incorrect logical operators or precedence,
(3) Expectation of equality when precision error makes equality
unlikely,
(4) Incorrect comparison of variables,
(5) Improper or nonexistent loop termination,
(6) Failure to exit when divergent iteration is encountered, and
(7) Improperly modified loop variables.
Unit Test Procedures
After source level code has been developed, reviewed, and verified
for correspondence to component-level design, unit test case design
begins.
Fig 4: Unit test Procedures
Because a component is not a stand-alone program, driver and/or
stub software must be developed for each unit test. The unit test
environment is illustrated in Figure 4. In most applications a driver is
nothing more than a "main program" that accepts test case data,
passes such data to the component (to be tested), and prints
relevant results. Stubs serve to replace modules that are subordinate
to (called by) the component to be tested. A stub or "dummy
subprogram" uses the subordinate module's interface, may do
minimal data manipulation, prints verification of entry, and returns
control to the module undergoing testing.
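A minimal sketch of a driver and a stub for a hypothetical component follows; all names here (DiscountCalculator, TaxService, and the canned values) are assumptions introduced for illustration.

// Component under test (assumed example).
class DiscountCalculator {
    private final TaxService taxService;   // subordinate module, replaced by a stub

    DiscountCalculator(TaxService taxService) {
        this.taxService = taxService;
    }

    double computeDiscount(double amount) {
        double rate = taxService.rateFor(amount);
        return amount > 100 ? amount * (0.10 - rate) : 0.0;
    }
}

interface TaxService {
    double rateFor(double amount);
}

// Stub: uses the subordinate module's interface, does minimal data
// manipulation, prints verification of entry, and returns control.
class TaxServiceStub implements TaxService {
    public double rateFor(double amount) {
        System.out.println("TaxServiceStub called with " + amount);
        return 0.02;   // canned answer
    }
}

// Driver: a "main program" that accepts test case data, passes it to
// the component under test, and prints relevant results.
class UnitTestDriver {
    public static void main(String[] args) {
        DiscountCalculator c = new DiscountCalculator(new TaxServiceStub());
        System.out.println("discount(250) = " + c.computeDiscount(250));
        System.out.println("discount(50)  = " + c.computeDiscount(50));
    }
}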
Unit testing is simplified when a component with high cohesion is
designed.
Integration Testing
The objective is to take unit tested components and build a program
structure that has been dictated by design.
Options: In the "big bang" approach, all components are combined in
advance and the entire program is tested as a whole. In incremental
integration, the program is constructed and tested in small
increments, where errors are easier to isolate and correct.
A number of different incremental integration strategies are discussed below.
Top-Down Integration
Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program).
Modules subordinate to the main control module are incorporated
into the structure in either a depth-first or a breadth-first manner.
Referring to fig 5, depth-first integration would integrate all
components on a major control path of the structure. Selection of a
major path is somewhat arbitrary and depends on application-specific
characteristics. For example, selecting the left-hand path, components
M1, M2, M5 would be integrated first. Next, M8 or (if necessary for
proper functioning of M2) M6 would be integrated. Then, the central
and right-hand control paths are built.
Fig 5: Depth-first integration
Breadth-first integration incorporates all components directly
subordinate at each level, moving across the structure horizontally.
From the figure 5, components M2, M3, and M4 (a replacement
for stub S4) would be integrated first. The next control level, M5,
M6, and so on, follows.
Steps for Top-Down Integration:
 The main control module is used as a test driver and stubs are
substituted for all components directly subordinate to the main
control module.
 Depending on the integration approach selected (i.e., depth or
breadth first), subordinate stubs are replaced one at a time with
actual components.
 Tests are conducted as each component is integrated.
 On completion of each set of tests, another stub is replaced with
the real component.
 Regression testing may be conducted to ensure that new errors
have not been introduced.
The process continues from step 2 until the entire program structure
is built.
What problems are encountered when top-down integration strategy
is chosen?
The most common of these problems occurs when processing at low
levels in the hierarchy is required to adequately test upper levels.
Stubs replace low-level modules at the beginning of top-down
testing; therefore, no significant data can flow upward in the
program structure
Bottom-Up Integration
Bottom-up integration testing, as its name implies, begins
construction and testing with atomic modules (i.e., components at
the lowest levels in the program structure).
Steps for Bottom-Up Integration
 Low-level components are combined into clusters (sometimes
called builds) that perform a specific software subfunction.
 A driver (a control program for testing) is written to coordinate
test case input and output.
 The cluster is tested.
 Drivers are removed and clusters are combined moving upward
in the program structure.
Fig 6: Bottom-Up Integration
Components are combined to form clusters 1, 2, and 3. Each of the
clusters is tested using a driver (shown as a dashed block).
Components in clusters 1 and 2 are subordinate to Ma. Drivers D1
and D2 are removed and the clusters are interfaced directly to Ma.
Similarly, driver D3 for cluster 3 is removed prior to integration with
module Mb. Both Ma and Mb will ultimately be integrated with
component Mc, and so forth.
Regression Testing
Each time a new module is added as part of integration testing, the
software changes. New data flow paths are established, new I/O may
occur, and new control logic is invoked. Regression testing is the re-
execution of some subset of tests that have already been conducted
to ensure that changes have not propagated unintended side effects.
The regression test suite (the subset of tests to be executed) contains
three different classes of test cases:
 A representative sample of tests that will exercise all software
functions.
 Additional tests that focus on software functions that are likely
to be affected by the change.
 Tests that focus on the software components that have been
changed.
Smoke Testing
Smoke testing is an integration testing approach that is commonly
used when “shrink-wrapped” software products are being developed.
It is designed as a pacing mechanism for time-critical projects
The smoke testing approach encompasses the following activities:
 Software components that have been translated into code are
integrated into a “build.”
o A build includes all data files, libraries, reusable modules,
and engineered components that are required to
implement one or more product functions.
 A series of tests is designed to expose errors that will keep the
build from properly performing its function.
o The intent should be to uncover “show stopper” errors
that have the highest likelihood of throwing the software
project behind schedule.
 The build is integrated with other builds and the entire product
(in its current form) is smoke tested daily.
o The integration approach may be top down or bottom
up
Smoke testing provides a number of benefits when it is applied on
complex, time-critical software engineering projects:
 Integration risk is minimized.
 The quality of the end-product is improved.
 Progress is easier to assess.
Comments on Integration Testing
The major disadvantage of the top-down approach is the need for
stubs and the attendant testing difficulties that can be associated with
them.
The major disadvantage of bottom-up integration is that "the
program as an entity does not exist until the last module is added"
Selection of an integration strategy depends upon software
characteristics and, sometimes, project schedule. In general, a
combined approach (sometimes-called sandwich testing) that uses
top-down tests for upper levels of the program structure, coupled
with bottom-up tests for subordinate levels may be the best
compromise.
Fig 7: Sandwich Testing
What is a critical module and why should we identify it?
 Address several software requirements
 Has a high level of control (resides relatively high in the program
structure)
 Is complex or error prone
 Has definite performance requirements
4.3. WHITE-BOX TESTING
White-box testing is sometimes called glass-box testing. The goal is to
ensure that all statements and conditions have been executed at least
once.
Using white-box testing methods, the software engineer can derive
test cases that
 Guarantee that all independent paths within a module have been
exercised at least once
 Exercise all logical decisions on their true and false sides,
 Execute all loops at their boundaries and within their operational
bounds
 Exercise internal data structures to ensure their validity.
Basis Path Testing
Basis path testing is a white-box testing technique first proposed by
Tom McCabe. Test cases derived to exercise the basis set are
guaranteed to execute every statement in the program at least one
time during testing.
Flow Graph Notation
Before the basis path method can be introduced, a simple notation
for the representation of control flow, called a flow graph (or
program graph) must be introduced.
Each circle, called a flow graph node, represents one or more
procedural statements. The arrows on the flow graph, called edges
or links, represent flow of control. Each node that contains a
condition is called a predicate node and is characterized by two or
more edges emanating from it.
Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program.
In the context of the basis path testing method, the value computed
for Cyclomatic complexity defines the number of independent paths
in the basis set of a program and provides us with an upper bound
for the number of tests that must be conducted to ensure that all
statements have been executed at least once.
An independent path is any path through the program that
introduces at least one new set of processing statements or a new
condition. When stated in terms of a flow graph, an independent
path must move along at least one edge that has not been traversed
before the path is defined.
Fig 8: An independent path
In the above fig 8, the set of independent paths is as follows:
 Path 1: 1-11
 Path 2: 1-2-3-4-5-10-1-11
 Path 3: 1-2-3-6-8-9-10-1-11
 Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge. The path
1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not considered an independent path because it is simply a
combination of already specified paths and does not traverse any
new edges.
Paths 1, 2, 3, and 4 constitute a basis set for the above flow graph.
How is Cyclomatic complexity computed?
 The number of regions of the flow graph corresponds to the
Cyclomatic complexity.
 Cyclomatic complexity, V(G), for a flow graph, G, is defined as
V(G) = E - N + 2
Where E is the number of flow graph edges, N is the number of
flow graph nodes.
 Cyclomatic complexity, V(G), for a flow graph, G, is also
defined as V(G) = P + 1
Where P is the number of predicate nodes contained in the flow
graph G.
Referring once more to the above flow graph, the Cyclomatic
complexity can be computed using each of the algorithms just noted:
 The flow graph has four regions.
 V(G) = 11 edges - 9 nodes + 2 = 4.
 V(G) = 3 predicate nodes + 1 = 4.
Thus, the Cyclomatic complexity of the above flow graph is 4.
Deriving Test Cases
Using the above PDL, let us describe how to derive the test cases.
The following steps are used to derive the set of test cases:
1. Using the design or code as a foundation, draw a corresponding
flow graph.
Fig 9: Deriving test cases
2. Determine the cyclomatic complexity of the resultant flow graph.
V(G) = 6 regions
V(G) = 17 edges - 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
3. Determine a basis set of linearly independent paths.
The value of V (G) provides the number of linearly independent
paths through the program control structure. In the case of
procedure average, we expect to specify six paths:
 path 1: 1-2-10-11-13
 path 2: 1-2-10-12-13
 path 3: 1-2-3-10-11-13
 Path 4: 1-2-3-4-5-8-9-2-. . .
 Path 5: 1-2-3-4-5-6-8-9-2-. . .
 Path 6: 1-2-3-4-5-6-7-8-9-2-. . .
The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path
through the remainder of the control structure is acceptable.
It is often worthwhile to identify predicate nodes as an aid in the
derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are
predicate nodes
4. Prepare test cases that will force execution of each path in the
basis set.
Each test case is executed and compared to expected results. Once
all test cases have been completed, the tester can be sure that all
statements in the program have been executed at least once.
4.4. GRAPH MATRICES
To develop a software tool that assists in basis path testing, a data
structure, called a graph matrix, can be quite useful.
A graph matrix is a square matrix whose size (i.e., number of rows
and columns) is equal to the number of nodes on the flow graph.
Each row and column corresponds to an identified node, and matrix
entries correspond to connections (an edge) between nodes. A
simple example of a flow graph and its corresponding graph matrix is
shown in figure 10 below.
Fig 10: Flow graph and its corresponding Graph matrix
Fig 11: Graph matrix in Cyclomatic complexity
What is a graph matrix and how do we extend its use for testing? The
graph matrix is nothing more than a tabular representation of a flow
graph. By adding a link weight to each matrix entry, the graph
matrix can become a powerful tool for evaluating program control
structure during testing.
In its simplest form, the link weight is 1 (a connection exists) or 0 (a
connection does not exist).
Represented in this form, the graph matrix is called a connection
matrix.
Referring to the above figure 11, each row with two or more entries
represents a predicate node.
Performing the arithmetic shown to the right of the connection
matrix provides us with still another method for determining
Cyclomatic complexity.
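A small sketch of that arithmetic in Java, assuming the connection matrix is held as a boolean array (the method name and array representation are assumptions; the computation sums "connections minus one" for each non-empty row and then adds 1):

// Compute cyclomatic complexity from a connection matrix: for each
// row that has at least one connection, add (connections - 1);
// finally add 1. Rows with two or more entries are predicate nodes.
class ConnectionMatrix {
    static int cyclomaticComplexity(boolean[][] matrix) {
        int sum = 0;
        for (boolean[] row : matrix) {
            int connections = 0;
            for (boolean edge : row) {
                if (edge) connections++;
            }
            if (connections > 0) {
                sum += connections - 1;
            }
        }
        return sum + 1;
    }
}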
Control Structure Testing
These broaden testing coverage and improve quality of white-box
testing
Condition Testing
Condition testing is a test case design method that exercises the
logical conditions contained in a program module.
A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator. A relational
expression takes the form
E1 <relational-operator> E2
Where E1 and E2 are arithmetic expressions and <relational-
operator> is one of the following: <, ≤, =, ≠ (non equality), >,
or ≥.
A compound condition is composed of two or more simple
conditions, Boolean operators, and parentheses.
A condition without relational expressions is referred to as a Boolean
expression.
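For instance (an illustrative Java snippet, not drawn from the text), the following conditions show the categories just defined:

class ConditionExamples {
    static void examples(boolean done, int count, int limit) {
        if (done) { }                     // simple condition: a Boolean variable
        if (count <= limit) { }           // simple condition: relational expression E1 <= E2
        if (!done && count < limit) { }   // compound condition: simple conditions joined
                                          // by Boolean operators (and parentheses)
        if (done || !done) { }            // Boolean expression: no relational expressions
    }
}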
Types of errors in a condition include the following:
 Boolean operator error (incorrect/missing/extra Boolean
operators), Boolean variable error, Boolean parenthesis error,
Relational operator error and Arithmetic expression error.
The purpose of condition testing is to detect not only errors in the
conditions of a program but also other errors in the program
Data Flow Testing
The data flow testing method selects test paths of a program
according to the locations of definitions and uses of variables in the
program.
To illustrate the data flow testing approach, assume that each
statement in a program is assigned a unique statement number and
that each function does not modify its parameters or global variables.
For a statement with S as its statement number,
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is empty and its
USE set is based on the condition of statement S. The definition of
variable X at statement S is said to be live at statement S' if there
exists a path from statement S to statement S' that contains no other
definition of X.
A definition-use (DU) chain of variable X is of the form [X, S, S'],
where S and S' are statement numbers, X is in DEF(S) and USE(S'),
and the definition of X in statement S is live at statement S'. One
simple data flow testing strategy is to require that every DU chain be
covered at least once. We refer to this strategy as the DU testing
strategy.
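As a small illustration (the code and the statement numbering in the comments are assumptions introduced here), consider the following fragment and the DEF/USE sets it produces:

class DataFlowExample {
    static int example(int a, int b) {
        int x = a + b;        // S1: DEF(S1) = {x},  USE(S1) = {a, b}
        if (x > 10) {         // S2: DEF(S2) = {},   USE(S2) = {x}
            b = x * 2;        // S3: DEF(S3) = {b},  USE(S3) = {x}
        }
        return x + b;         // S4: DEF(S4) = {},   USE(S4) = {x, b}
    }
}

The definition of x at S1 is live at S2, S3 and S4 because no other definition of x intervenes, so [x, S1, S4] is a DU chain, as is [b, S3, S4]. DU testing would require at least one test path that covers each such chain.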
Loop Testing
Loop testing is a white-box testing technique that focuses exclusively
on the validity of loop constructs. Four different classes of loops can
be defined: simple loops, concatenated loops, nested loops, and
unstructured loops (Figure 12).
Fig 12: Loop Testing
Simple loops: The following set of tests can be applied to simple
loops, where n is the maximum number of allowable passes through
the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n - 1, n, and n + 1 passes through the loop.
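A minimal sketch that turns these five guidelines into concrete pass counts, assuming a hypothetical maximum of n allowable passes:

def simple_loop_pass_counts(n, typical=None):
    """Pass counts for the five simple-loop tests: skip, one pass, two
    passes, m passes (m < n), and n-1, n, n+1 passes."""
    m = typical if typical is not None else n // 2   # any m < n will do
    return [0, 1, 2, m, n - 1, n, n + 1]

print(simple_loop_pass_counts(10))  # [0, 1, 2, 5, 9, 10, 11]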
Nested loops: If we were to extend the test approach for simple
loops to nested loops, the number of possible tests would grow
geometrically as the level of nesting increases. This would result in an
impractical number of tests. Beizer suggests an approach that will
help to reduce the number of tests:
 Start at the innermost loop. Set all other loops to minimum
values.
 Conduct simple loop tests for the innermost loop while holding
the outer loops at their minimum iteration parameter (e.g., loop
counter) values.
 Work outward, conducting tests for the next loop, but keeping
all other outer loops at minimum values and other nested loops
to "typical" values.
 Continue until all loops have been tested.
Concatenated loops: Concatenated loops can be tested using the
approach defined for simple loops, if each of the loops is
independent of the other. However, if two loops are concatenated
and the loop counter for loop 1 is used as the initial value for loop
2, then the loops are not independent. When the loops are not
independent, the approach applied to nested loops is recommended.
Unstructured loops: Whenever possible, this class of loops should be
redesigned to reflect the use of structured programming constructs.

4.5. BLACK-BOX TESTING

Black-box testing, also called behavioral testing, focuses on the
functional requirements of the software.
Black-box testing enables the software engineer to derive sets of
input conditions that will fully exercise all functional requirements for
a program.
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions, (2) interface errors, (3) errors in
data structures or external data base access, (4) behavior or
performance errors, and (5) initialization and termination errors.
Unlike white-box testing, which is performed early in the testing
process, black-box testing tends to be applied during later stages of
testing.

Graph-Based Testing Methods


Software testing begins by creating a graph of important objects and
their relationships and then devising a series of tests that will cover
the graph so that each object and relationship is exercised and errors
are uncovered.

Fig 13: Graph-Based Testing Methods


A graph is a collection of nodes that represent objects; links that
represent the relationships between objects; node weights that
describe the properties of a node (e.g., a specific data value or state
behavior); and link weights that describe some characteristic of a link.
A directed link (represented by an arrow) indicates that a
relationship moves in only one direction.
A bidirectional link, also called a symmetric link, implies that the
relationship applies in both directions.
Parallel links are used when a number of different relationships are
established between graph nodes.


Referring to the figure 13, a menu select on new file generates a
document window. The node weight of document window provides
a list of the window attributes that are to be expected when the
window is generated. The link weight indicates that the window must
be generated in less than 1.0 second. An undirected link establishes
a symmetric relationship between the new file menu select and
document text, and parallel links indicate relationships between
document window and document text.
The software engineer then derives test cases by traversing the graph
and covering each of the relationships shown. These test cases are
designed in an attempt to find errors in any of the relationships.
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides
the input domain of a program into classes of data from which test
cases can be derived.
Test case design for equivalence partitioning is based on an
evaluation of equivalence classes for an input condition.
If a set of objects can be linked by relationships that are symmetric,
transitive, and reflexive, an equivalence class is present
An equivalence class represents a set of valid or invalid states for
input conditions.
Typically, an input condition is either a specific numeric value, a
range of values, a set of related values, or a Boolean condition.
Equivalence classes may be defined according to the following
guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined.
 If an input condition requires a specific value, one valid and two
invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid and
one invalid equivalence class are defined.
Test cases are selected so that the largest number of attributes of an
equivalence class is exercised at once.
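A small sketch, assuming a hypothetical input condition that specifies an integer in the range 1 to 100; the range guideline above yields one valid and two invalid classes.

# Hypothetical input condition: an integer count in the range 1..100.
equivalence_classes = {
    "valid: 1 <= count <= 100": [1, 50, 100],
    "invalid: count < 1":       [0, -7],
    "invalid: count > 100":     [101, 5000],
}

def representative_cases(classes):
    """Pick one representative test value from each equivalence class."""
    return {name: values[0] for name, values in classes.items()}

print(representative_cases(equivalence_classes))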


Boundary Value Analysis
A greater number of errors tends to occur at the boundaries of the
input domain than in the "center".
Boundary value analysis leads to a selection of test cases that exercise
bounding values.
BVA leads to the selection of test cases at the "edges" of the class.
Rather than focusing solely on input conditions, BVA derives test
cases from the output domain as well
How do I create BVA test cases?
 If an input condition specifies a range bounded by values a and
b, test cases should be designed with values a and b and just
above and just below a and b.
 If an input condition specifies a number of values, test cases
should be developed that exercise the minimum and maximum
numbers. Values just above and below minimum and maximum
are also tested.
 Apply guidelines 1 and 2 to output conditions. For example,
assume that a temperature vs. pressure table is required as
output from an engineering analysis program. Test cases should
be designed to create an output report that produces the
maximum (and minimum) allowable number of table entries.
 If internal program data structures have prescribed boundaries
(e.g., an array has a defined limit of 100 entries), be certain to
design a test case to exercise the data structure at its boundary.
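A minimal sketch of guideline 1, assuming a hypothetical integer range bounded by a and b and a step of 1 for the "just above" and "just below" values.

def bva_values(a, b, step=1):
    """Boundary-value cases for a range [a, b]: the bounds themselves plus
    values just below and just above each bound (step chosen per data type)."""
    return sorted({a - step, a, a + step, b - step, b, b + step})

print(bva_values(1, 100))  # [0, 1, 2, 99, 100, 101]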
Orthogonal Array Testing
The orthogonal array testing method is particularly useful in finding
errors associated with region faults—an error category associated
with faulty logic within a software component
To illustrate the difference between orthogonal array testing and
more conventional “one input item at a time” approaches, consider
a system that has three input items, X, Y, and Z. Each of these input
items has three discrete values associated with it. There are 3^3 = 27
possible test cases. Phadke suggests a geometric view of the possible
test cases associated with X, Y, and Z, illustrated in the figure below.
Referring to the figure, one input item at a time may be varied in
sequence along each input axis. This results in relatively limited
coverage of the input domain (represented by the left-hand cube in
the figure).

When orthogonal array testing occurs, an L9 orthogonal array of test
cases is created. The L9 orthogonal array has a balancing property;
that is, test cases (represented by black dots in the figure) are
"dispersed uniformly throughout the test domain."
To illustrate the use of the L9 orthogonal array, consider the send
function for a fax application. Four parameters, P1, P2, P3, and P4,
are passed to the send function.


Each takes on three discrete values. For example, P1 takes on values:
P1 = 1, send it now
P1 = 2, send it one hour later
P1 = 3, send it after midnight
P2, P3, and P4 would also take on values of 1, 2 and 3, signifying
other send functions.
If a "one input item at a time" testing strategy were chosen, the
following sequence of tests (P1, P2, P3, P4) would be specified:
(1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1, 1), (1, 2, 1, 1), (1, 3, 1, 1),
(1, 1, 2, 1), (1, 1, 3, 1), (1, 1, 1, 2), and (1, 1, 1, 3).
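For comparison, the sketch below lays out a standard L9(3^4) orthogonal array over the four send parameters; the particular row arrangement is a conventional L9 layout chosen for illustration and is not taken from the text's table.

# A standard L9(3^4) orthogonal array: 9 test cases, 4 parameters, 3 levels.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Check the single-parameter balance: every level appears exactly three
# times in every column (each pair of columns also covers all 9 combinations).
for col in range(4):
    counts = {level: sum(1 for row in L9 if row[col] == level) for level in (1, 2, 3)}
    assert counts == {1: 3, 2: 3, 3: 3}

for p1, p2, p3, p4 in L9:
    print(f"send(P1={p1}, P2={p2}, P3={p3}, P4={p4})")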
Phadke assesses the one-item-at-a-time test cases as having limited
fault-detection ability; the L9 orthogonal array, in contrast, disperses
its test cases uniformly across the input domain and therefore detects
and isolates faults far more effectively with the same number of tests.
Validation Testing
Validation succeeds when software functions in a manner that can be
reasonably expected by the customer.
Validation Test Criteria
Software validation is achieved through a series of black-box tests
that demonstrate conformity with requirements.
Both the plan and procedure are designed to ensure that all
functional requirements are satisfied, all behavioral characteristics are
achieved, all performance requirements are attained, documentation
is correct and human-engineered and other requirements are met.
After each validation test case has been conducted, one of two
possible conditions exists:
 The function or performance characteristics conform to
specification and are accepted
 A deviation from specification is uncovered and a deficiency list
is created.
Configuration Review
An important element of the validation process is a configuration
review. The intent of the review is to ensure that all elements of the
software configuration have been properly developed, are cataloged,
and have the necessary detail to bolster the support phase of the
software life cycle. The configuration review is sometimes called an
audit.
Alpha and Beta Testing
A customer conducts the alpha test at the developer’s site.
The beta test is conducted at one or more customer sites by the end
users of the software. Unlike the alpha test, the developer is generally
not present.
System Testing
Software is incorporated with other system elements (e.g., hardware,
people, information), and a series of system integration and
validation tests are conducted. These tests fall outside the scope of
the software process and are not conducted solely by software
engineers. A classic system-testing problem is "finger-pointing." This
occurs when an error is uncovered, and each system element
developer blames the other for the problem.
Recovery Testing
Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed.
Security Testing
Security testing attempts to verify that the protection mechanisms
built into a system will, in fact, protect it from improper penetration.
During security testing, the tester plays the role(s) of the individual
who desires to penetrate the system.
Stress Testing
Stress testing executes a system in a manner that demands resources
in abnormal quantity, frequency, or volume. For example, (1)
special tests may be designed that generate ten interrupts per second
when one or two is the average rate, (2) input data rates may be
increased by an order of magnitude to determine how input
functions will respond, and (3) test cases that require maximum
memory or other resources may be executed. Essentially, the tester
attempts to break the program.
A variation of stress testing is a technique called sensitivity testing.
Sensitivity testing attempts to uncover data combinations within valid
input classes that may cause instability or improper processing.
Performance Testing
Performance testing is designed to test the run-time performance of
software within the context of an integrated system. Performance
testing occurs throughout all steps in the testing process.
Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.

4.6. THE ART OF DEBUGGING

Debugging occurs as a consequence of successful testing. That is,
when a test case uncovers an error, debugging is the process that
results in the removal of the error.

The Debugging Process


The debugging process begins with the execution of a test case.
Results are assessed, and a lack of correspondence between expected
and actual performance is encountered. In many cases, the
noncorresponding data are a symptom of an underlying cause as yet
hidden. The debugging process attempts to match symptom with
cause, thereby leading to error correction.

Fig 14: Process of Debugging


The debugging process will always have one of two outcomes:
(1) the cause will be found and corrected, or (2) the cause will not
be found.
Debugging Approaches
In general, three categories of debugging approaches may be
proposed: (1) brute force, (2) backtracking, and (3) cause
elimination.
The brute force category of debugging is probably the most common
and least efficient method for isolating the cause of a software error.
We apply brute force debugging methods when all else fails.
Backtracking is a common debugging approach that can be used
successfully in small programs. Beginning at the site where a
symptom has been uncovered, the source code is traced backward
(manually) until the site of the cause is found.


The third approach to debugging—cause elimination—is manifested
by induction or deduction and introduces the concept of binary
partitioning. Data related to the error occurrence are organized to
isolate potential causes. A "cause hypothesis" is devised and the
aforementioned data are used to prove or disprove the hypothesis.
Once a bug has been found, it must be corrected. However, as we
have already noted, the correction of a bug can introduce other
errors and therefore do more harm than good.
Van Vleck [VAN89] suggests three simple questions that every
software engineer should ask before making the "correction" that
removes the cause of a bug:
 Is the cause of the bug reproduced in another part of the
program?
 What "next bug" might be introduced by the fix I am about to
make?
 What could we have done to prevent this bug in the first place?

4.7. PRODUCT METRICS

SOFTWARE QUALITY
Software quality is defined as the conformance to explicitly stated
functional and performance requirements, explicitly documented
development standards, and implicit characteristics that are expected
of all professionally developed software
McCall’s Quality Factors
Factors that affect software quality can be categorized in two broad
groups:
 Factors that can be directly measured (e.g. defects uncovered
during testing)
 Factors that can be measured only indirectly (e.g. usability or
maintainability)

McCall's quality factors (Figure 15) focus on three important aspects
of a software product: its operational characteristics, its ability to
undergo change, and its adaptability to new environments.
McCall and his colleagues provide the following descriptions:
Correctness: The extent to which a program satisfies its specification
and fulfills the customer's mission objectives.
Reliability: The extent to which a program can be expected to
perform its intended function with required precision. [It should be
noted that other, more complete definitions of reliability have been
proposed.]
Efficiency: The amount of computing resources and code required
by a program to perform its function.
Integrity: Extent to which access to software or data by unauthorized
persons can be controlled.
Usability: Effort required to learn, operate, prepare input for, and
interpret output of a program.
Maintainability: Effort required to locate and fix an error in a
program. [This is a very limited definition.]
Flexibility: Effort required to modify an operational program.
Testability: Effort required to test a program to ensure that it
performs its intended function.

Portability: Effort required to transfer the program from one
hardware and/or software system environment to another.
Reusability: Extent to which a program [or parts of a program] can
be reused in other applications; related to the packaging and scope of
the functions that the program performs.
Interoperability: Effort required to couple one system to another.

4.8. ISO 9126 QUALITY FACTORS

The ISO 9126 standard was developed in an attempt to identify the
key quality attributes for computer software. The standard identifies
six key quality attributes:
 Functionality - A set of attributes that bear on the existence of a
set of functions and their specified properties. The functions are
those that satisfy stated or implied needs.
o Suitability
o Accuracy
o Interoperability
o Compliance
o Security
 Reliability - A set of attributes that bear on the capability of
software to maintain its level of performance under stated
conditions for a stated period of time.
o Maturity
o Recoverability
 Usability - A set of attributes that bear on the effort needed for
use, and on the individual assessment of such use, by a stated or
implied set of users.
o Learnability
o Understandability
o Operability
 Efficiency - A set of attributes that bear on the relationship
between the level of performance of the software and the
Testing Strategies / 151

amount of resources used, under stated conditions.


o Time Behavior
o Resource Behavior
 Maintainability - A set of attributes that bear on the effort
needed to make specified modifications.
o Stability
o Analyzability
o Changeability
o Testability
 Portability - A set of attributes that bear on the ability of
software to be transferred from one environment to another.
o Installability
o Replaceability
o Adaptability

4.9. A FRAMEWORK FOR PRODUCT METRICS

General principles for selecting product measures and metrics are
discussed in this section. The generic measurement process activities
parallel the scientific method taught in natural science classes
(formulation, collection, analysis, interpretation, feedback).
If the measurement process is too time consuming, no data will ever
be collected during the development process. Metrics should be easy
to compute or developers will not take the time to compute them.
The tricky part is that, in addition to being easy to compute, the
metrics need to be perceived as important to predicting whether
product quality can be improved.
Measures, Metrics and Indicators
 A measure provides a quantitative indication of the extent,
amount, dimension, capacity, or size of some attribute of a
product or process
 The IEEE glossary defines a metric as “a quantitative measure of
the degree to which a system, component, or process possesses a
given attribute.”

 An indicator is a metric or combination of metrics that provides
insight into the software process, a software project, or the product
itself.
Measurement Principles
 The objectives of measurement should be established before data
collection begins;
 Each technical metric should be defined in an unambiguous
manner;
 Metrics should be derived based on a theory that is valid for the
domain of application (e.g., metrics for design should draw upon
basic design concepts and principles and attempt to provide an
indication of the presence of an attribute that is deemed
desirable);
 Metrics should be tailored to best accommodate specific
products and processes.
Measurement Process
 Formulation. The derivation of software measures and metrics
appropriate for the representation of the software that is being
considered.
 Collection. The mechanism used to accumulate data required to
derive the formulated metrics.
 Analysis. The computation of metrics and the application of
mathematical tools.
 Interpretation. The evaluation of metrics results in an effort to
gain insight into the quality of the representation.
 Feedback. Recommendations derived from the interpretation of
product metrics transmitted to the software team.
S/W metrics will be useful only if they are characterized effectively
and validated so that their worth is proven.
 A metric should have desirable mathematical properties.
 When a metric represents a S/W characteristic that increases
when positive traits occur or decreases when undesirable traits
are encountered, the value of the metric should increase or
decrease in the same manner.


 Each metric should be validated empirically in a wide variety of
contexts before being published or used to make decisions.
Goal-Oriented Software Measurement
 The Goal/Question/Metric Paradigm
o Establish an explicit measurement goal that is specific to
the process activity or product characteristic that is to be
assessed
o Define a set of questions that must be answered in order
to achieve the goal, and
o Identify well-formulated metrics that help to answer
these questions.
A goal definition template can be used to define each measurement
goal.
 Goal definition template
o Analyze {the name of activity or attribute to be
measured}
o for the purpose of {the overall objective of the analysis}
o with respect to {the aspect of the activity or attribute
that is considered}
o from the viewpoint of {the people who have an interest
in the measurement}
o In the context of {the environment in which the
measurement takes place}.
The Attributes of Effective S/W Metrics
 Simple and computable. It should be relatively easy to learn how
to derive the metric, and its computation should not demand
inordinate effort or time
 Empirically and intuitively persuasive. The metric should satisfy
the engineer’s intuitive notions about the product attribute under
consideration
 Consistent and objective. The metric should always yield results
that are unambiguous.
 Consistent in its use of units and dimensions. The mathematical
computation of the metric should use measures that do not lead to
bizarre combinations of units.
 Programming language independent. Metrics should be based on
the analysis model, the design model, or the structure of the
program itself.
 An effective mechanism for quality feedback. That is, the metric
should provide a software engineer with information that can
lead to a higher quality end product.

4.10. METRICS FOR ANALYSIS MODEL

These metrics examine the analysis model with the intent of
predicting the "size" of the resultant system. Size is an indicator of
design complexity and is almost always an indicator of increased
coding, integration, and testing effort.
Function-Based Metrics
The function-point metric can be used effectively as a means for
measuring the functionality delivered by the system.
Using historical data, the FP metric can be used to:
 Estimate cost or effort required to design code and test the
software.
 Predict no. of errors that will be encountered during testing
 Forecast no. of components and no. of projected source lines in
the implemented system.
Function points are derived using an empirical relationship based on
countable measures of software information domain.
Number of External inputs (EI): Each external input originates from
a user or is transmitted from another application. Inputs are often
used to update Internal Logic Files.
Number of External Outputs: Each external output is derived data
within the application that provides information to the user. External
outputs refer to reports, screens, and error messages
Number of external inquiries: An external inquiry is defined as an
online input that results in the generation of some immediate
software response in the form of an online output.
Number of internal logical files: Each internal logical file is a logical
grouping of data that resides within the applications boundary and is
maintained via external inputs.
Number of external interface files: Each external interface file is a
logical grouping of data that resides external to application but
provides information that may be of use to application

Three user inputs—password, panic button, and


activate/deactivate—are shown in the figure along with two
inquires—zone inquiry and sensor inquiry. One file (system
configuration file) is shown. Two user outputs (messages and sensor
status) and four external interfaces (test sensor, zone setting,


activate/deactivate, and alarm alert) are also present. These data,
along with the appropriate complexity weights, are shown in the
table above. The count total must be adjusted using the equation
FP = count total * [0.65 + 0.01 * ∑(Fi)]
where count total is the sum of all FP entries obtained above and
Fi (i = 1 to 14) are complexity adjustment values. For the purposes
of this example, we assume that ∑(Fi) is 46 (a moderately complex
product). Therefore,
FP = 50 * [0.65 + 0.01 * 46] = 56
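A short sketch of this computation; the per-category weighting factors below are the standard simple-complexity FP weights and are an assumption here, since the book's count table is not reproduced.

# Counts from the example above (3 inputs, 2 inquiries, 1 file, 2 outputs,
# 4 external interfaces); the simple-complexity weights are assumed.
domain_counts = {"EI": 3, "EO": 2, "EQ": 2, "ILF": 1, "EIF": 4}
simple_weights = {"EI": 3, "EO": 4, "EQ": 3, "ILF": 7, "EIF": 5}

count_total = sum(domain_counts[k] * simple_weights[k] for k in domain_counts)
sum_fi = 46                            # complexity adjustment values, as in the text

fp = count_total * (0.65 + 0.01 * sum_fi)
print(count_total, fp)                 # count total 50; FP about 55.5, reported as 56 above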
Metrics for Specification Quality
List of characteristics that can be used to assess the quality of the
analysis model and the corresponding requirements specification:
specificity (lack of ambiguity), completeness, correctness,
understandability, verifiability, internal and external consistency,
achievability, concision, traceability, modifiability, precision, and
reusability.
We assume that there are nr requirements in a specification, such
that
nr = nf + nnf
where nf is the number of functional requirements and nnf is the
number of non-functional (e.g., performance) requirements.
To determine the specificity (lack of ambiguity) of requirements, we
compute
Q1 = nui / nr
where nui is the number of requirements for which all reviewers had
identical interpretations. The closer the value of Q1 to 1, the lower is
the ambiguity of the specification.
The completeness of functional requirements can be determined by
computing the ratio
Q2 = nu / (ni * ns)
where nu is the number of unique function requirements, ni is the
number of inputs (stimuli) defined or implied by the specification,
and ns is the number of states specified. The Q2 ratio measures the
percentage of necessary functions that have been specified for a
system.

4.11. METRICS FOR DESIGN MODEL

Architectural Design Metrics


Architectural design metrics focus on characteristics of the program
architecture. These metrics are black box in the sense that they do
not require any knowledge of the inner workings of a particular
software component.
Card and Glass define three software design complexity measures:
structural complexity, data complexity, and system complexity.
Structural complexity of a module i is defined in the following
manner:
S(i) = fout(i)^2
where fout(i) is the fan-out of module i (fan-out means the number of
modules directly subordinate to module i).
Data complexity provides an indication of the complexity in the
internal interface for a module i and is defined as
D(i) = v(i) / [fout(i) + 1]
where v(i) is the number of input and output variables that are
passed to and from module i.
System complexity is defined as the sum of structural and data
complexity, specified as
C(i) = S(i) + D(i)
As each of these complexity values increases, the overall
architectural complexity of the system also increases. This leads to a
greater likelihood that integration and testing effort will also increase.
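A small sketch of the three Card and Glass measures; the fan-out and variable counts below are hypothetical.

def structural_complexity(fan_out):
    return fan_out ** 2                      # S(i) = fout(i)^2

def data_complexity(num_io_vars, fan_out):
    return num_io_vars / (fan_out + 1)       # D(i) = v(i) / [fout(i) + 1]

def system_complexity(num_io_vars, fan_out):
    return structural_complexity(fan_out) + data_complexity(num_io_vars, fan_out)

# Hypothetical module i: fan-out of 3, six variables passed in and out.
print(system_complexity(num_io_vars=6, fan_out=3))  # 9 + 1.5 = 10.5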
Morphology (shape) metrics: a function of the number of modules
and the number of interfaces between modules.
Size = n + a

Fig 16: Complexity of design model


Where n is the number of nodes and a is the number of arcs. For the
architecture shown in Figure 16.
Size = 17 + 18 = 35
Depth = the longest path from the root (top) node to a leaf node.
For the architecture shown in Figure 16, depth = 4.
Width = maximum number of nodes at any one level of the
architecture. For the architecture shown in Figure 16, width = 6.
The arc-to-node ratio is r = a/n = 18/17 = 1.06.
DSQI (Design Structure Quality Index)
 The U.S. Air Force has designed the DSQI
 Compute s1 to s7 from data and architectural design
 S1:Total number of modules
 S2:Number of modules whose correct function depends on the
data input
 S3:Number of modules whose function depends on prior
processing
 S4:Number of data base items
 S5:Number of unique database items


 S6: Number of database segments
 S7:Number of modules with single entry and exit
 Calculate D1 to D6 from s1 to s7 as follows:
 D1=1 if standard design is followed otherwise D1=0
 D2(module independence)=(1-(s2/s1))
 D3(module not depending on prior processing)=(1-(s3/s1))
 D4(Data base size)=(1-(s5/s4))
 D5(Database compartmentalization)=(1-(s6/s4))
 D6(Module entry/exit characteristics)=(1-(s7/s1))
DSQI = ∑ wiDi
where i = 1 to 6, wi is the relative weighting of the importance of
each of the intermediate values, and ∑wi = 1 (if all Di are weighted
equally, then wi = 0.167).
The DSQI of the present design can be compared with past DSQI
values. If the DSQI is significantly lower than the average, further
design work and review are indicated.
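A minimal sketch of the DSQI computation, with hypothetical S1..S7 counts and equal weights of 1/6.

def dsqi(s1, s2, s3, s4, s5, s6, s7, standard_design=True, weights=None):
    """Design Structure Quality Index from the intermediate values D1..D6."""
    d = [
        1.0 if standard_design else 0.0,  # D1: standard design method used
        1 - (s2 / s1),                    # D2: module independence
        1 - (s3 / s1),                    # D3: independence of prior processing
        1 - (s5 / s4),                    # D4: database size
        1 - (s6 / s4),                    # D5: database compartmentalization
        1 - (s7 / s1),                    # D6: module entry/exit characteristics
    ]
    w = weights or [1 / 6] * 6            # equal weighting, sum(w) == 1
    return sum(wi * di for wi, di in zip(w, d))

# Hypothetical design: 50 modules, 10 input-dependent, 5 dependent on prior
# processing, 40 database items (30 unique), 8 segments, 45 single-entry/exit modules.
print(round(dsqi(50, 10, 5, 40, 30, 8, 45), 3))  # roughly 0.64 for these counts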
Metrics for Object-oriented design
Whitmire [WHI97] describes nine distinct and measurable
characteristics of an OO design:
Size: Size is defined in terms of four views: population, volume,
length, and functionality
Complexity: How classes of an OO design are interrelated to one
another.
Coupling: The physical connections between elements of the OO
design.
Sufficiency: "The degree to which an abstraction possesses the
features required of it, or the degree to which a design component
possesses features in its abstraction, from the point of view of the
current application."
Completeness: An indirect implication about the degree to which the
abstraction or design component can be reused.
Cohesion: The degree to which all operations work together to
achieve a single, well-defined purpose.
Primitiveness: Applied to operations and classes, the degree to which
an operation is atomic.
Similarity: The degree to which two or more classes are similar in
terms of their structure, function, behavior, or purpose.
Volatility: Measures the likelihood that a change will occur.
Class-Oriented Metrics-The CK Metrics suite
Proposed by Chidamber and Kemerer
Weighted methods per class (WMC): Assume that n methods of
complexity c1, c2, ..., cn are defined for a class C. Then
WMC = ∑ci for i = 1 to n
The number of methods and their complexity are reasonable
indicators of the amount of effort required to implement and test a
class.
Depth of the inheritance tree (DIT): The maximum length from a
node to the root of the tree (in the referenced class hierarchy,
DIT = 4). As DIT grows, low-level classes inherit many methods;
this leads to more difficulty in predicting behavior and greater design
complexity.
Number of children (NOC): The subclasses that are immediately
subordinate to a class in the class hierarchy are termed its children.
As NOC grows, reuse increases, but the abstraction represented by
the parent class is diluted if some children are not appropriate
members of the parent class.
Coupling between object classes (CBO): As CBO increases,
reusability decreases.
Response for a class (RFC): The set of methods that can potentially
be executed in response to a message received by an object of the
class. RFC is the number of methods in the response set; as RFC
increases, complexity increases.
Lack of cohesion in methods (LCOM): LCOM is the number of
methods that access one or more of the same attributes. If no
methods access the same attributes, LCOM = 0.
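A rough sketch of two of the CK metrics computed over a toy Python class hierarchy; weighting every method's complexity as 1 for WMC and treating the user-defined root class as DIT 0 are simplifying assumptions made for this illustration.

import inspect

class Sensor:                      # root of a toy hierarchy
    def read(self): ...
    def calibrate(self): ...

class TemperatureSensor(Sensor):
    def read(self): ...
    def to_celsius(self): ...

def wmc(cls):
    """Weighted Methods per Class with every method weighted 1."""
    return len([name for name, _ in inspect.getmembers(cls, inspect.isfunction)
                if name in cls.__dict__])

def dit(cls):
    """Depth of Inheritance Tree via the MRO; the root user class is depth 0."""
    return len(cls.__mro__) - 2    # exclude the class's own level and object

print(wmc(TemperatureSensor), dit(TemperatureSensor))  # 2 1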

4.12. CLASS-ORIENTED METRICS: THE MOOD METRICS SUITE

Method inheritance factor (MIF): the degree to which the class
architecture of an OO system makes use of inheritance,
MIF = ∑Mi(Ci) / ∑Ma(Ci)
where Mi(Ci) is the number of methods inherited (and not
overridden) in class Ci and Ma(Ci) is the number of methods that
can be invoked in association with Ci.
Coupling factor (CF): the ratio of actual couplings among classes to
the maximum possible number of couplings. If CF increases, the
complexity of the OO software also increases.


Class-Oriented Metrics Proposed by Lorenz and Kidd
Lorenz and Kidd divide class-based metrics into four broad categories:
Size-oriented metrics: focus on counts of attributes and operations for
an individual class
Inheritance-based metrics: focus on the manner in which operations
are reused through the class hierarchy
 Metrics for class internals: focus on cohesion
 Metrics for external characteristics: focus on coupling
Component-Level design metrics
 Cohesion metrics: a function of data objects and the focus of
their definition
 Coupling metrics: a function of input and output parameters,
global variables, and modules called

 Complexity metrics: hundreds have been proposed (e.g.,
cyclomatic complexity)

Operation-Oriented Metrics
 average operation size
 operation complexity
 average number of parameters per operation
Interface Design Metrics
Layout appropriateness: a function of layout entities, the geographic
position and the “cost” of making transitions among entities. This is
a worthwhile design metric for interface design.
Metrics for source code
 HSS (Halstead Software Science)
 Primitive measures may be derived after the code is generated,
or estimated once design is complete:
 n1 = the number of distinct operators that appear in a program
 n2 = the number of distinct operands that appear in a program
 N1 = the total number of operator occurrences
 N2 = the total number of operand occurrences
Length: N = n1 log2 n1 + n2 log2 n2
Program volume: V = N log2 (n1 + n2)
Volume ratio: L = (2/n1) * (n2/N2)
Metrics for Testing
 Program Level and Effort
 PL = 1/[(n1/2) x (N2/n2)]
 e = V/PL
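A minimal sketch of the measures above; the operator and operand counts are made up for illustration.

import math

# Hypothetical primitive counts: 10 distinct operators, 15 distinct operands,
# 40 operator occurrences, 30 operand occurrences.
n1, n2, N1, N2 = 10, 15, 40, 30

N = n1 * math.log2(n1) + n2 * math.log2(n2)   # length N = n1 log2 n1 + n2 log2 n2
V = N * math.log2(n1 + n2)                    # program volume V = N log2 (n1 + n2)
PL = 1 / ((n1 / 2) * (N2 / n2))               # program level PL = 1/[(n1/2) x (N2/n2)]
e = V / PL                                    # effort e = V / PL

print(round(N, 1), round(V, 1), PL, round(e, 1))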
Metrics for maintenance
An IEEE standard suggests a software maturity index (SMI) that
provides an indication of the stability of a software product.
 Mt = the number of modules in the current release
 Fc = the number of modules in the current release that have
been changed
 Fa = the number of modules in the current release that have
been added.
 Fd = the number of modules from the preceding release that
were deleted in the current release
 The Software Maturity Index, SMI, is defined as:
 SMI = [Mt – (Fc + Fa + Fd)] / Mt
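A small sketch of the SMI computation with hypothetical release counts.

def software_maturity_index(mt, fc, fa, fd):
    """SMI = [Mt - (Fc + Fa + Fd)] / Mt; it approaches 1.0 as the product
    begins to stabilize."""
    return (mt - (fc + fa + fd)) / mt

# Hypothetical release: 120 modules, 8 changed, 4 added, 2 deleted.
print(software_maturity_index(120, 8, 4, 2))  # about 0.883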

4.13. METRICS FOR PROCESS AND PRODUCTS

Process metrics are collected across all projects and over long periods
of time. Their aim is to provide a set of process indicators that lead to
long-term software process improvement.
Project Metrics enable a software project manager to
 Assess status of an ongoing project
 Track potential risks
 Uncover problem areas before they go critical
 Adjust work flow
 Evaluate the project team's ability to control quality

Fig 17: Project Metrics for Product, Process, People and Technology

4.14. SOFTWARE MEASUREMENT

Software measurement can be categorized as


Direct Measurement
 Direct measures of the software process include cost and effort.
 Direct measures of the product include lines of code, execution
speed, memory size, and defects reported over some set period of
time.
Indirect Measurement
 Indirect measures examine the quality of the software product
itself (e.g., functionality, complexity, efficiency, reliability, and
maintainability).
Reasons for measurement
 To gain a baseline for comparison with future assessments
 To determine status with respect to plans
 To predict size, cost, and duration estimates
 To improve product quality and guide process improvement
The metrics in software Measurement are
 Size oriented metrics
 Function oriented metrics
 Object oriented metrics
 Web based application metric
Size-Oriented Metrics
 Size-oriented metrics are concerned with direct measurement of
the software.
 A software company maintains a simple set of records for
calculating the size of the software.
It includes
 errors per KLOC (thousand lines of code)
 defects per KLOC
 $ per LOC
 pages of documentation per KLOC
 errors per person-month
 Errors per review hour


 LOC per person-month
 $ per page of documentation
Function Oriented Metrics
 Measure the functionality delivered by the application
 The most widely used function oriented metric is Function point
 Function point is independent of programming language
Typical function-oriented metrics include:
 errors per FP, defects per FP, $ per FP, pages of documentation
per FP, and FP per person-month
Object-Oriented Metrics
Relevant for object oriented programming
Based on the following
 Number of scenario scripts (use cases)
 Number of key classes (highly independent components that are
defined early in object-oriented analysis)
 Number of support classes (classes required to implement the
system but not immediately related to the problem domain)
 Average number of support classes per key class (analysis class)
 Number of subsystems (an aggregation of classes that support a
function that is visible to the end user of a system)
Use-Case Oriented Metrics
Web Engineering Project Metrics
Measures that can be collected are:
 Number of static Web pages (the end user has no control over
the content displayed on the page)
 Number of dynamic Web pages (end-user actions result in
customized content displayed on the page)
 Number of internal page links (internal page links are pointers
that provide a hyperlink to some other Web page within the
WebApp)
 Number of persistent data objects (as the number of persistent
data objects grows, the complexity of the WebApp also grows)
 Number of external systems interfaced (as the requirement for
interfacing grows, system complexity and development effort
also increase)
 Number of static content objects (these encompass graphical,
audio, and video information incorporated into the WebApp)
 Number of dynamic content objects (generated based on end-
user actions)
 Number of executable functions (an executable function
provides some computational service to the end user; as the
number of functions grows, construction effort increases)
We can define a metric that reflects the degree of end-user
customization required for the WebApp and correlate it with the
effort expended on the Web project. Let
Nsp = number of static pages
Ndp = number of dynamic pages
Then the customization index is C = Ndp / (Ndp + Nsp).
Metrics for Software Quality
The overriding goal of software engineering is to produce a high-
quality system, application, or product. To achieve this goal,
software engineers must apply effective methods coupled with
modern tools within the context of a mature software process.
Measuring Quality
Correctness: the degree to which a program operates according to its
specification.
Correctness = defects / KLOC
Maintainability: the degree to which a program is amenable to
change.
Integrity: the degree to which a program is impervious to outside
attack.
Integrity = ∑ [1 - (threat x (1 - security))]
 Threat: the probability that an attack of a specific type will occur
within a given time
 Security: the probability that an attack of a specific type will be
repelled
Usability: the degree to which a program is easy to use.
Defect Removal Efficiency
A quality metric that provides benefit at both the project and
process level is defect removal efficiency (DRE)
DRE is defined in the following manner:
DRE = E/ (E + D)
E is the number of errors found before delivery of the software to
the end-user
D is the number of defects found after delivery
The ideal value of DRE is 1; that is, no defects are found in the
software after delivery.
As E increases (for a given value of D), the overall value of DRE
begins to approach 1. In fact, as E increases, it is likely that the final
value of D will decrease (errors are filtered out before they become
defects).
DRE can also be used within the project to assess a team’s ability to
find errors before they are passed to the next framework activity or
software engineering task.
When used in this context, we redefine DRE as
DREi = Ei / (Ei + Ei+1)
Where Ei is the number of errors found during software engineering
activity i
Ei+1 is the number of errors found during software engineering
activity i+1 that are traceable to errors that were not discovered in
software engineering activity i.
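A minimal sketch of both forms of DRE, using hypothetical error and defect counts.

def dre(errors_before_delivery, defects_after_delivery):
    """Overall defect removal efficiency: DRE = E / (E + D)."""
    E, D = errors_before_delivery, defects_after_delivery
    return E / (E + D)

def dre_activity(errors_in_activity_i, errors_traceable_from_next):
    """Per-activity form: DREi = Ei / (Ei + Ei+1)."""
    Ei, Ei1 = errors_in_activity_i, errors_traceable_from_next
    return Ei / (Ei + Ei1)

print(dre(90, 10))           # 0.9  -> 90% of defects removed before delivery
print(dre_activity(24, 6))   # 0.8  -> 80% caught before the next activity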


CHAPTER 5
Risk Management
KEY CONCEPTS
5.1. REACTIVE VS PROACTIVE RISK STRATEGIES
5.2. SOFTWARE RISKS
5.3. RISK IDENTIFICATION
5.4. ASSESSING PROJECT RISK
5.5. RISK COMPONENTS AND DRIVERS
5.6. RISK PROJECTION
5.7. RISK MITIGATION MONITORING AND MANAGEMENT (RMMM)
5.8. RMMM PLAN
5.9. QUALITY MANAGEMENT
5.10. SOFTWARE QUALITY ASSURANCE
5.11. SOFTWARE REVIEWS
5.12. SOFTWARE RELIABILITY
5.13. ISO 9000 QUALITY STANDARDS

Risk is an undesired event or circumstance that occurs while a
project is underway. It is necessary for the project manager to
anticipate and identify the different risks that a project may be
susceptible to.
Risk Management aims at reducing the impact of all kinds of risk that
may affect a project by identifying, analyzing, and managing them.

5.1. REACTIVE VS PROACTIVE RISK STRATEGIES

Reactive risk strategies: The project team reacts to risks when they
occur. The team flies into action in an attempt to correct the
problem rapidly. This is often called a "fire-fighting" mode. When this
fails, "crisis management" takes over and the project is in real
jeopardy.
Proactive risk strategies: Potential risks are identified, their
probability and impact are assessed, and they are ranked by
importance.
The software team establishes a plan for managing risk. The primary
objective is to avoid risk, but because not all risks can be avoided,
the team works to develop a contingency plan that will enable it to
respond in a controlled and effective manner.

5.2. SOFTWARE RISKS

Risk always involves two characteristics


 Uncertainty: the risk may or may not happen.
 Loss: if the risk becomes a reality, unwanted consequences or
losses will occur.
When risks are analyzed, it is important to quantify the level of
uncertainty and the degree of loss associated with each risk.
What types of risks are we likely to encounter as software is built?
 Project risk: Threaten the project plan and affect schedule and
resultant cost
 Technical risk: Threaten the quality and timeliness of software to
be produced
 Business risk: Threaten the viability of software to be built
 Known risk: Risks that can be uncovered by careful evaluation of
the project plan and environment
 Predictable risk: Risks that are extrapolated from past project
experience
 Unpredictable risk: Risks that can and do occur but are
extremely difficult to identify in advance

5.3. RISK IDENTIFICATION

Risk identification is a systematic attempt to specify threats to the


project plan (estimates, schedule, resource loading, etc.). By
identifying known and predictable risks, the project manager takes a
first step toward avoiding them when possible and controlling them
when necessary.
Two distinct types of risks for each category of risks specified above:
 Generic risks are a potential threat to every software project.
 Product-specific risks can be identified only by those with a clear
understanding of the technology, the people, and the
environment that is specific to the project at hand.
One method for identifying risks is to create a risk item checklist.
The checklist can be used for risk identification and focuses on
known and predictable risks in the following generic subcategories:
 Product size: risks associated with the overall size of the software
to be built or modified.
 Business impact: risks associated with constraints imposed by
management or the marketplace.
 Customer characteristics: risks associated with the sophistication
of the customer and the developer's ability to communicate with
the customer in a timely manner.
 Process definition: risks associated with the degree to which the
software process has been defined.
 Development environment: risks associated with the availability
and quality of the tools to be used to build the product.
 Technology to be built: risks associated with the complexity of
the system to be built and the "newness" of the technology that
is packaged by the system.
 Staff size and experience: risks associated with the overall
technical and project experience of the software engineers who
will do the work.

5.4. ASSESSING PROJECT RISK

 Have top software and customer managers formally committed
to support the project?
 Are end-users enthusiastically committed to the project and the
system/product to be built?
 Are requirements fully understood by the software engineering
team and their customers?
 Have customers been involved fully in the definition of
requirements?
 Do end-users have realistic expectations?
 Is project scope stable?
 Does the software engineering team have the right mix of skills?
 Are project requirements stable?
 Does the project team have experience with the technology to
be implemented?
 Is the number of people on the project team adequate to do the
job?
 Do all customer/user constituencies agree on the importance of
the project and on the requirements for the system/product to
be built?

5.5. RISK COMPONENTS AND DRIVERS

The risk components are defined in the following manner.


 Performance risk—the degree of uncertainty that the product
will meet its requirements and be fit for its intended use.
 Cost risk—the degree of uncertainty that the project budget will
be maintained.
 Support risk—the degree of uncertainty that the resultant
software will be easy to correct, adapt, and enhance.
 Schedule risk—the degree of uncertainty that the project
schedule will be maintained and that the product will be
delivered on time.
The impact of each risk driver on the risk component is divided into
one of four impact categories—negligible, marginal, critical, or
catastrophic.

5.6. RISK PROJECTION

It estimates the impact of risk on the project and the product


Risk projection, also called risk estimation, attempts to rate each risk
in two ways:
 The likelihood or probability that the risk is real
 The consequences of the problems associated with the risk,
should it occur.
The project planner, along with other managers and technical staff,
performs four risk projection activities
 establish a scale that reflects the perceived likelihood of a risk
 delineate the consequences of the risk
 estimate the impact of the risk on the project and the product
 note the overall accuracy of the risk projection so that there will
be no misunderstandings
Developing a Risk Table
A risk table provides a project manager with a simple technique for
risk projection
Impact values: 1 = catastrophic, 2 = critical, 3 = marginal,
4 = negligible.

A project team begins by listing all risks. This can be accomplished
with the help of the risk item checklists. Each risk is categorized in
the second column (e.g., PS implies a project size risk, BU implies a
business risk). The probability of occurrence of each risk is entered in
the next column of the table. The probability value for each risk can
be estimated by team members individually. Next, the impact of
each risk is assessed.
Once the first four columns of the risk table have been completed,
the table is sorted by probability and by impact. High-probability,
high-impact risks percolate to the top of the table, and low-
probability risks drop to the bottom. This accomplishes first-order
risk prioritization.
The project manager studies the resultant sorted table and defines a
cutoff line. The cutoff line (drawn horizontally at some point in the
table) implies that only risks that lie above the line will be given
further attention. Risks that fall below the line are re-evaluated to
accomplish second-order prioritization.
Risk impact and probability have a distinct influence on management
concern. A risk factor that has a high impact but a very low
probability of occurrence should not absorb a significant amount of
management time. However, high-impact risks with moderate to
high probability and low-impact risks with high probability should be
carried forward into the risk analysis steps that follow.
The column labeled RMMM contains a pointer into a Risk
Mitigation, Monitoring and Management Plan or, alternatively, a
collection of risk information sheets developed for all risks that lie
above the cutoff.
Assessing Risk Impact
The overall risk exposure, RE, is determined using the following
relationship
RE = P x C
Where
P is the probability of occurrence for a risk, and
C is the cost to the project should the risk occur
For example, assume that the software team defines a project risk in
the following manner:
Risk identification. Only 70 percent of the software components
scheduled for reuse will, in fact, be integrated into the application.
The remaining functionality will have to be custom developed.
Risk probability. 80% (likely).
Risk impact. 60 reusable software components were planned. If only
70 percent can be used, 18 components would have to be
developed from scratch (in addition to other custom software that
has been scheduled for development). Since the average component
is 100 LOC and local data indicate that the software engineering
cost for each LOC is $14.00, the overall cost (impact) to develop
the components would be 18 x 100 x 14 = $25,200.
Risk exposure. RE = 0.80 x 25,200 ≈ $20,200.
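The same computation expressed as a short sketch, using the component count, size, and cost-per-LOC figures from the example above.

def risk_exposure(probability, cost):
    """RE = P x C."""
    return probability * cost

# Values from the reuse-risk example: 18 components built from scratch,
# about 100 LOC each, at $14 per LOC, with an 80% probability.
cost_impact = 18 * 100 * 14               # $25,200
print(risk_exposure(0.80, cost_impact))   # 20160.0 (about $20,200, as noted above)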
Risk Refinement
One way to do this is to represent the risk in condition-transition-
consequence (CTC) format.
Using the CTC format for the reuse risk noted above, we can write:
Given that all reusable software components must conform to
specific design standards and that some do not conform, then there
is concern that (possibly) only 70 percent of the planned reusable
modules may actually be integrated into the as-built system, resulting
in the need to custom engineer the remaining 30 percent of
components.
This general condition can be refined in the following manner:
Subcondition 1. Certain reusable components were developed by a
third party with no knowledge of internal design standards.
Subcondition 2. The design standard for component interfaces has
not been solidified and may not conform to certain existing reusable
components.
Subcondition 3. Certain reusable components have been
implemented in a language that is not supported on the target
environment.

5.7. RISK MITIGATION MONITORING AND MANAGEMENT (RMMM)

The goal of RMMM is to assist the project team in developing a
strategy for dealing with risk.
If a software team adopts a proactive approach to risk, avoidance is
always the best strategy. This is achieved by developing a plan for
risk mitigation. For example, assume that high staff turnover is noted
as a project risk, r1. Based on past history and management
intuition, the likelihood, l1, of high turnover is estimated to be 0.70
(70 percent, which is rather high), and the impact, x1, is projected at
level 2.
To mitigate this risk, project management must develop a strategy
for reducing turnover. Among the possible steps to be taken are:
 Meet with current staff to determine causes for turnover (e.g.,
poor working conditions, low pay, and competitive job market).
 Mitigate those causes that are under our control before the
project starts.
 Once the project commences, assume turnover will occur and
develop techniques to ensure continuity when people leave.
 Organize project teams so that information about each
development activity is widely dispersed.
 Define documentation standards and establish mechanisms to be
sure that documents are developed in a timely manner.


 Conduct peer reviews of all work (so that more than one person
is "up to speed”).
 Assign a backup staff member for every critical technologist.
As the project proceeds, risk monitoring activities commence.
In the case of high staff turnover, the following factors can be
monitored:
 General attitude of team members based on project pressures.
 The degree to which the team has jelled.
 Interpersonal relationships among team members.
 Potential problems with compensation and benefits.
 The availability of jobs within the company and outside it.
In addition to monitoring these factors, the project manager should
monitor the effectiveness of the risk mitigation steps. For example,
one of the mitigation steps noted above called for the definition of
documentation standards and mechanisms to make sure that
documents are developed in a timely manner; these documents
should be monitored so that a newcomer could get up to speed from
them if a critical person were to leave.
Risk management and contingency planning assumes that mitigation
efforts have failed and that the risk has become a reality. Continuing
the example, the project is well underway and a number of people
announce that they will be leaving. If the mitigation strategy has
been followed, backup is available, information is documented, and
knowledge has been dispersed across the team.
In addition, the project manager may temporarily refocus resources
(and readjust the project schedule) to those functions that are fully
staffed, enabling newcomers who must be added to the team to “get
up to speed.” Those individuals who are leaving are asked to stop all
work and spend their last weeks in “knowledge transfer mode.”
It is important to note that RMMM steps incur additional project
cost. For example, spending the time to "backup" every critical
technologist costs money.
RMMM addresses three issues: risk mitigation (avoidance), risk
monitoring, and risk management and contingency planning. Risk
management and contingency planning cover the actions to be taken
in the event that mitigation steps have failed and the risk has become
a live problem. These steps are documented in an RMMM (Risk
Mitigation, Monitoring and Management) Plan.

5.8. RMMM PLAN

 Risk Avoidance (mitigation): proactive planning for risk
avoidance, achieved by developing a plan for risk mitigation.
 Risk Monitoring: what factors can we track that will enable us to
determine if the risk is becoming more or less likely?
o Assessing whether predicted risks occur or not
o Ensuring risk aversion steps are being properly applied
o Collecting information for future risk analysis
o Determining which risks caused which problems
 Risk Management: what contingency plans do we have if the risk
becomes a reality?
o Contingency planning
A risk management strategy can be included in the software project
plan or the risk management steps can be organized into a separate
Risk Mitigation, Monitoring and Management Plan. The RMMM plan
documents all work performed as part of risk analysis and is used by
the project manager as part of the overall project plan.
Each risk is documented individually by using a Risk Information
Sheet (RIS). The RIS is maintained using a database system.
Once RMMM has been documented and the project has begun, risk
mitigation and monitoring steps commence.

5.9. QUALITY MANAGEMENT

Quality Concepts
Variation control is the heart of quality control. From one project to
another, we want to minimize the difference between the predicted
resources needed to complete a project and the actual resources
used, including staff, equipment, and calendar time.
Quality
The American Heritage Dictionary defines quality as “a characteristic
or attribute of something." As an attribute of an item, quality refers
to measurable characteristics: things we are able to compare to
known standards such as length, color, electrical properties, and
malleability.
When we examine an item based on its measurable characteristics,
two kinds of quality may be encountered: quality of design and
quality of conformance.
Quality of design: Refers to the characteristics that designers specify
for the end product. In software development, quality of design
encompasses requirements, specifications, and the design of the
system.
Quality of conformance is the degree to which the design
specifications are followed during manufacturing. It focuses on
implementation. If the implementation follows the design and the
resulting system meets its requirements and performance goals,
conformance quality is high.
Robert Glass argues that a more “intuitive” relationship is in order:
User satisfaction = compliant product + good quality + delivery
within budget and schedule
At the bottom line, Glass contends that quality is important, but if
the user isn’t satisfied, nothing else really matters. “A product’s
quality is a function of how much it changes the world for the
better.”
Quality Control
It involves the series of inspections, reviews, and tests used
throughout the software process to ensure that each work product
meets the requirements placed upon it.
Quality Assurance
It consists of a set of auditing and reporting functions that assess the
effectiveness and completeness of quality control activities. The goal
of quality assurance is to provide management with the data
necessary to be informed about product quality, thereby gaining
insight and confidence that product quality is meeting its goals.

Cost of Quality
The cost of quality includes all costs incurred in the pursuit of quality
or in performing quality-related activities.
Quality costs may be divided into costs associated with prevention,
appraisal, and failure.
Prevention costs include quality planning, formal technical reviews,
test equipment, and training.
Appraisal costs include activities to gain insight into product
condition the “first time through” each process. Examples of
appraisal costs include in-process and inter process inspection,
equipment calibration and maintenance and testing
Failure costs are those that would disappear if no defects appeared
before shipping a product to customers. Failure costs may be
subdivided into internal failure costs and external failure costs.
Internal failure costs are incurred when we detect a defect in our product prior to shipment. Internal failure costs include rework, repair, and failure mode analysis.
External failure costs are associated with defects found after the product has been shipped to the customer. Examples of external failure costs are complaint resolution, product return and replacement, help line support, and warranty work.
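As a rough numeric illustration of these categories (all figures below are made-up assumptions), the total cost of quality can be obtained by summing the prevention, appraisal, and failure costs:

    # Hypothetical cost-of-quality tally; all figures are illustrative only.
    costs = {
        "prevention": {"quality planning": 8000, "technical reviews": 12000,
                       "test equipment": 5000, "training": 4000},
        "appraisal": {"in-process inspection": 9000, "calibration": 2000,
                      "testing": 15000},
        "internal failure": {"rework": 11000, "repair": 3000,
                             "failure mode analysis": 2500},
        "external failure": {"complaint resolution": 6000, "returns": 4000,
                             "help line support": 7000, "warranty work": 5000},
    }

    category_totals = {name: sum(items.values()) for name, items in costs.items()}
    total_cost_of_quality = sum(category_totals.values())

    for name, subtotal in category_totals.items():
        print(f"{name:17s} {subtotal:8d}")
    print(f"{'total':17s} {total_cost_of_quality:8d}")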
5.10. SOFTWARE QUALITY ASSURANCE
Software quality assurance (SQA) is the concern of every software
engineer to reduce cost and improve product time-to-market.
A Software Quality Assurance Plan is not merely another name for a test plan, though test plans are included in an SQA plan. SQA activities are performed on every software project.
Software quality is defined as Conformance to explicitly stated
functional and performance requirements, explicitly documented
development standards, and implicit characteristics that are expected
of all professionally developed software.
SQA Activities
Prepares an SQA plan for a project. The plan identifies
 evaluations to be performed
 audits and reviews to be performed
 standards that are applicable to the project
 procedures for error reporting and tracking
 documents to be produced by the SQA group
 amount of feedback provided to the software project team
Participates in the development of the project’s software process
description.
The SQA group reviews the process description for compliance with
organizational policy, internal software standards, externally imposed
standards (e.g., ISO-9001), and other parts of the software project
plan.
Reviews software engineering activities to verify compliance with the
defined software process.
Identifies, documents, and tracks deviations from the process and
verifies that corrections have been made.
Audits designated software work products to verify compliance with
those defined as part of the software process.
Reviews selected work products; identifies, documents, and tracks
deviations; verifies that corrections have been made and periodically
reports the results of its work to the project manager.
Ensures that deviations in software work and work products are
documented and handled according to a documented procedure.
Deviations may be encountered in project plan, process description,
and applicable standards
Records any noncompliance and reports to senior management.
Noncompliance items are tracked until they are resolved.
5.11. SOFTWARE REVIEWS
The purpose of software reviews is to find errors before they are passed on to another software engineering activity.
What Are Reviews?
 a meeting conducted by technical people for technical people
 a technical assessment of a work product created during the
software engineering process
 a software quality assurance mechanism
Software engineers (and others) conduct formal technical reviews
(FTRs) for software quality assurance.
Using formal technical reviews (walkthroughs or inspections) is an
effective means for improving software quality
Cost Impact of software defects
Defect amplification and removal
Formal Technical reviews
An FTR is a software quality control activity performed by software
engineers and others. The objectives are:
 To uncover errors in function, logic or implementation for any
representation of the software.
 To verify that the software under review meets its requirements.
 To ensure that the software has been represented according to
predefined standards.
 To achieve software that is developed in a uniform manner and
 To make projects more manageable.
In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to software analysis, design, construction, and testing.
Review meeting
The review meeting in an FTR should abide by the following constraints:
 The review meeting should involve between three and five people.
 Every person should prepare for the meeting, but preparation should require no more than two hours of work per person.
 The duration of the review meeting should be less than two
hours.
The focus of the FTR is on a work product (for example, a requirements specification, a detailed component design, or a source code listing for a component).
The individual who has developed the work product (i.e., the producer) informs the project leader that the work product is complete and that a review is required.
The project leader contacts a review leader, who evaluates the product for readiness, generates copies of the product material, and distributes them to two or three reviewers for advance preparation. Each reviewer is expected to spend between one and two hours reviewing the product and making notes. The review leader also reviews the product and establishes an agenda for the review meeting.
The review meeting is attended by the review leader, all reviewers, and the producer. One of the reviewers acts as a recorder, who notes down all important points discussed in the meeting.
The meeting (FTR) starts with an introduction of the agenda, after which the producer introduces the product. As the producer “walks through” the product, the reviewers raise issues which they have prepared in advance. If errors are found, the recorder notes them down.
At the end of the review, all attendees of the FTR must decide
whether to (1) accept the product without further modification, (2)
reject the product due to severe errors(once corrected, another
review must be performed), or (3) accept the product provisionally
(minor errors have been encountered and must be corrected, but no
additional review will be required). The decision made, all FTR
attendees complete a sign-off, indicating their participation in the
review and their concurrence with the review team's findings.
Review Reporting and record keeping
During the FTR, a reviewer (recorder) records all issues that have
been raised
 A review summary report answers three questions
o What was reviewed?
o Who reviewed it?
o What were the findings and conclusions?
The review summary report is a single-page form with possible attachments.
The review issues list serves two purposes
 To identify problem areas in the product
 To serve as an action item checklist that guides the producer as
corrections are made
It is important to establish a follow-up procedure to ensure that
items on the issues list have been properly corrected. One approach
is to assign the responsibility for follow-up to the review leader.
Review Guidelines
 Review the product, not the producer
 Set an agenda and maintain it
 Limit debate and rebuttal
 Enunciate problem areas, but don’t attempt to solve every
problem noted
 Take written notes
 Limit the number of participants and insist upon advance
preparation.
 Develop a checklist for each product that is likely to be reviewed
 Allocate resources and schedule time for FTRs
 Conduct meaningful training for all reviewers
 Review your early reviews
Sample-Driven Reviews
Thelin and his colleagues suggest a sample-driven review (SDR) process in which samples of all software engineering work products are inspected to determine which work products are most error prone. SDRs attempt to quantify those work products that are primary targets for full FTRs.
To accomplish this …
 Inspect a fraction ai of each software work product i. Record the number of faults fi found within ai.
 Develop a gross estimate of the number of faults within work product i by multiplying fi by 1/ai.
 Sort the work products in descending order according to the gross estimate of the number of faults in each.
 Focus available review resources on those work products that have the highest estimated number of faults.
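A minimal sketch of this estimation and ranking step, assuming hypothetical work-product names, sample fractions, and fault counts:

    # Sample-driven review triage: estimate total faults per work product
    # from an inspected fraction, then rank the products for full FTRs.
    # The names, fractions, and fault counts are illustrative assumptions.
    samples = [
        # (work product i, fraction inspected a_i, faults found f_i)
        ("requirements specification", 0.20, 6),
        ("architectural design", 0.25, 3),
        ("component design", 0.10, 4),
        ("source code for module X", 0.15, 9),
    ]

    estimates = []
    for product, a_i, f_i in samples:
        gross_estimate = f_i * (1.0 / a_i)   # estimated faults in the whole product
        estimates.append((product, gross_estimate))

    # Sort in descending order of estimated faults; review the worst first.
    for product, estimate in sorted(estimates, key=lambda e: e[1], reverse=True):
        print(f"{product:28s} estimated faults: {estimate:.0f}")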
Statistical Software Quality Assurance
Statistical quality assurance reflects a growing trend throughout
industry to become quantitative about quality. For software,
statistical quality assurance implies the following steps:
 Information about software defects is collected and categorized.
 An attempt is made to trace each defect to its underlying cause
(e.g., non-conformance to specifications, design error, violation
of standards, and poor communication with the customer).
 Using the Pareto principle (80 percent of the defects can be
traced to 20 percent of all possible causes), isolate the 20
percent (the "vital few").
 Once the vital few causes have been identified, move to correct
the problems that have caused the defects.
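A small sketch of the Pareto step, assuming a hypothetical tally of defects by underlying cause; it ranks the causes and reports the smallest set that accounts for roughly 80 percent of all defects:

    # Pareto analysis of defect data: find the "vital few" causes that
    # account for roughly 80 percent of the defects.
    # Cause names and counts are illustrative assumptions.
    defects_by_cause = {
        "incomplete or erroneous specification": 205,
        "misinterpretation of customer communication": 156,
        "error in data representation": 130,
        "inconsistent component interface": 58,
        "intentional deviation from specification": 48,
        "violation of programming standards": 25,
    }

    total = sum(defects_by_cause.values())
    running = 0
    vital_few = []
    for cause, count in sorted(defects_by_cause.items(), key=lambda c: c[1], reverse=True):
        running += count
        vital_few.append((cause, count))
        if running / total >= 0.80:
            break

    print("Vital few causes (about 80 percent of all defects):")
    for cause, count in vital_few:
        print(f"  {cause}: {count} defects")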
Six Sigma for Software Engineering
Six Sigma is the most widely used strategy for statistical software quality assurance.
 The term “six sigma” is derived from six standard deviations—
3.4 instances (defects) per million occurrences—implying an
extremely high quality standard.
 The Six Sigma methodology defines the following core steps, often referred to as DMAIC (define, measure, analyze, improve, and control):
o Define customer requirements and deliverables and
project goals via well-defined methods of customer
communication
o Measure the existing process and its output to determine
current quality performance (collect defect metrics)
o Analyze defect metrics and determine the vital few
causes.
o Improve the process by eliminating the root causes of
defects.
o Control the process to ensure that future work does not
reintroduce the causes of defects.
If new processes are being developed
 Design each new process to avoid root causes of defects and to
meet customer requirements
 Verify that the process model will avoid defects and meet
customer requirements
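As a rough numeric illustration of the "defects per million" idea behind the six sigma standard (the counts below are illustrative assumptions, not a prescribed calculation):

    # Defects per million opportunities (DPMO), a common way of expressing
    # process quality in Six Sigma terms. The counts are made-up examples.
    defects_found = 7
    units_inspected = 2500
    opportunities_per_unit = 40        # e.g. checklist items per work product

    dpmo = defects_found / (units_inspected * opportunities_per_unit) * 1_000_000
    print(f"DPMO = {dpmo:.1f}")        # the six sigma target corresponds to 3.4 DPMO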
5.12. SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a computer program in a specified environment for a
specified time.
To illustrate, program X is estimated to have a reliability of 0.96
over eight elapsed processing hours. In other words, if program X
were to be executed 100 times and require eight hours of elapsed
processing time (execution time), it is likely to operate correctly
(without failure) 96 times out of 100.
Reliability can be measured directly and estimated using historical and developmental data.
Measures of Reliability and Availability
If we consider a computer-based system, a simple measure of
reliability is mean-time-between-failure (MTBF), where
MTBF = MTTF + MTTR
The acronyms MTTF and MTTR are mean-time-to-failure and mean-
time-to-repair, respectively.
Many researchers argue that MTBF is a far more useful measure than
defects/KLOC or defects/FP. Stated simply, an end-user is concerned
with failures, not with the total error count.
Software availability is the probability that a program is operating
according to requirements at a given point in time and is defined as
Availability = [MTTF/(MTTF + MTTR)] * 100%
The MTBF reliability measure is equally sensitive to MTTF and
MTTR. The availability measure is somewhat more sensitive to
MTTR, an indirect measure of the maintainability of software
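A minimal sketch of these two measures, assuming hypothetical MTTF and MTTR figures in hours:

    # Reliability and availability measures from the text:
    #   MTBF = MTTF + MTTR
    #   Availability = MTTF / (MTTF + MTTR) * 100%
    # The MTTF and MTTR values below are illustrative assumptions (in hours).
    mttf = 68.0    # mean time to failure
    mttr = 2.0     # mean time to repair

    mtbf = mttf + mttr
    availability = mttf / (mttf + mttr) * 100.0

    print(f"MTBF = {mtbf:.1f} hours")
    print(f"Availability = {availability:.2f}%")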
Software Safety
Software safety is a software quality assurance activity that focuses
on the identification and assessment of potential hazards that may
affect software negatively and cause an entire system to fail.
If hazards can be identified early in the software engineering process,
software design features can be specified that will either eliminate or
control potential hazards.
Analysis techniques such as fault tree analysis, real-time logic or Petri
net models can be used to predict the chain of events that can cause
hazards and the probability that each of the events will occur to
create the chain.
Although software reliability and software safety are closely related
to one another, it is important to understand the subtle difference
between them. Software reliability uses statistical analysis to
determine the likelihood that a software failure will occur. However,
the occurrence of a failure does not necessarily result in a hazard or
mishap.
Software safety examines the ways in which failures result in
conditions that can lead to a mishap.
5.13. ISO 9000 QUALITY STANDARDS
A quality assurance system may be defined as the organizational
structure, responsibilities, procedures, processes, and resources for
implementing quality management
Quality assurance systems are created to help organizations ensure
their products and services satisfy customer expectations by meeting
their specifications.
ISO 9000 describes quality assurance elements in generic terms that
can be applied to any business regardless of the products or services
offered
ISO 9001:2000 is the quality assurance standard that applies to software engineering. The standard contains 20 requirements that must be present for an effective quality assurance system. Because the ISO 9001:2000 standard is applicable to all engineering disciplines, a special set of ISO guidelines has been developed.
The requirements delineated by ISO 9001:2000 address topics such as management responsibility, quality system, contract review, design control, document and data control, product identification and traceability, process control, inspection and testing, corrective and preventive action, control of quality standards, internal quality audits, training, servicing, and statistical techniques. For a software organization to become registered to ISO 9001:2000, it must establish policies and procedures to address each of the requirements noted and then be able to demonstrate that these policies and procedures are being followed.
CHAPTER 6
Question Bank, Tutorial Questions
and Syllabus
KEY CONCEPTS
6.1. QUESTION BANK .......................................................... 190
6.2. TUTORIAL QUESTIONS ................................................. 196
6.3. SYLLABUS ...................................................................... 200
6.1. QUESTION BANK
UNIT ONE
1 Mark Questions
1. Define Software Engineering?
2. Define the term software.
3. What is specific Goal?
4. What do you mean by process pattern?
5. Define Process Assessment?
6. What do you mean by specific practices?
7. What do you mean by Intent and Pattern?
8. Give the applications of Software Engineering.
9. What is phase pattern?
10. What do you mean by CMMI?
5 Mark Questions
1. Explain the characteristics of Software.
2. Give the evolving nature of Software.
3. Explain the 5 framework activities involved in SDLC?
4. Explain CMMI.
5. Explain in brief about PSP and TSP.
6. Explain about unified process model in brief.
7. What is the need of process model? What are the different
process models that are used commonly?
8. Explain Incremental Model.
9. In brief explain the characteristics of software.
10. Explain different layers in Software Engineering.
15 Mark Questions
1. What are different evolutionary process models? Explain in
brief with a neat sketch.
2. Explain in brief about specialized process model.
3. Briefly explain the umbrella activities with neat diagram.
4. Briefly explain the umbrella activities with a neat diagram.
5. What is a myth? Explain different types of Software myths.
6. With a neat sketch, explain RAD model.
UNIT TWO
1 Mark Questions
1. What is requirement engineering?
2. Define completeness.
3. Define consistency.
4. What is feasibility study?
5. Define open interview.
6. What is a view point?
7. What is requirement change management process?
8. What is Referential attribute?
9. Define closed interview.
10. What do you know by Domain Point?
5 Mark Questions
1. What do you mean by requirement? Explain the different
types of requirements.
2. Explain about functional requirements with an example.
3. Explain about requirement validation.
4. Explain about context model with example.
5. With different notations, explain data flow model.
6. Explain state chart model.
7. Explain in brief about user requirements.
8. Briefly, with a neat diagram, explain structured methods.
9. Explain about system requirements.
10. Illustrate system requirements document.
15 Mark Questions
1. Briefly classify and explain the non-functional requirements.
2. Briefly with a neat diagram, explain requirement engineering
process.
3. With an example, explain data models.
4. Explain object models with examples.
5. Explain about requirement management.
UNIT THREE
1 Mark Questions
1. What is user interface design?
2. What is design engineering?
3. Define architecture?
4. Define object and a class?
5. What do you mean by interface?
6. What is meant by component?
7. Define design process?
8. Define design quality?
9. Define data design?
10. Name the three golden rules?
5 Marks Questions
1. Briefly explain design concepts?
2. With a neat diagram explain design evaluation?
3. Explain object constraint language?
4. Convert mapping data flow into software architecture?
5. Explain briefly about architectural design?
6. Explain the concept of designing conventional components?
7. Explain about interface analysis?
8. Briefly explain about architectural styles and patterns?
9. Explain about class based components?
10. Briefly explain interface design steps?
15 Marks Questions
1. What is user interface design? Briefly explain about golden
rules?
2. Explain in detail about modeling component level design.
3. Explain in detail about Design engineering.
4. Explain in detail about interface design steps.
5. Briefly explain the concept of creating architectural design
UNIT FOUR
1 Mark Questions
1. What is testing?
2. What is a bug?
3. What is Software Quality?
4. What do you mean by process?
5. Define verification.
6. Define validation.
7. What is debugging?
8. Define metrics.
9. What do you mean by strategy in software testing?
10. Define product metrics.
5 Mark Questions
1. Explain the art of debugging with a neat diagram.
2. Explain black box testing.
3. Explain Unit testing with a diagram.
4. Explain about smoke testing with a diagram.
5. Explain the metrics of software quality.
6. Explain in detail about software measurement.
7. Explain about validation testing.
8. Explain about software quality in detail.
9. Explain the metrics for design model.
10. Explain the metrics for software quality.
15 Mark Questions
1. Explain the different types of testing strategies for
conventional software.
2. Explain in detail about product metrics.
3. Explain in detail about metrics for process and products.
4. Explain the strategic approach for software testing.
5. Explain in detail about system testing and regression testing
with neat diagrams.
UNIT FIVE
1 Mark Questions
1. What is Risk?
2. Define RMMM?
3. Abbreviate ISO.
4. Define reliability.
5. What do you mean by review?
6. What do you mean by SQA?
7. What do you mean by refinement of risk?
8. What do you mean by projection of risk?
9. What is software risk?
10. How many types of reviews are present in quality
management?
5 Mark Questions
1. Explain reactive risk with an example.
2. Explain in detail about risk identification.
3. Explain in detail about statistical SQA?
4. Briefly explain FTR.
5. Explain briefly risk refinement.
6. Explain in detail about proactive risk strategies.
7. Explain software quality assurance in detail.
8. Explain the different types of software reviews.
9. Explain risk projection in detail.
10. Explain the different quality concepts
15 Mark Questions
1. Briefly explain ISO 9000 quality standards.
2. With a neat diagram, explain RMMM plan.
3. Explain reactive vs proactive risk strategies in detail.
4. Explain quality management in detail.
5. Explain about risk management in detail.
6.2. TUTORIAL QUESTIONS
UNIT ONE
1. Explain the characteristics of Software.
2. Give the evolving nature of Software.
3. Explain the 5 framework activities involved in SDLC?
4. Explain CMMI.
5. Explain in brief about PSP and TSP.
6. Explain about unified process model in brief.
7. Illustrate the need of process model? What are the different
process models that are used commonly?
8. Explain Incremental Model.
9. In brief explain the characteristics of software.
10. Explain different layers in Software Engineering.
11. Summarize different evolutionary process models? Explain in
brief with a neat sketch.
12. Explain in brief about specialized process model.
13. Briefly explain the umbrella activities with neat diagram.
14. Briefly explain the umbrella activities with a neat diagram.
15. Define a myth? Explain different types of Software myths.
16. With a neat sketch, explain RAD model.
UNIT TWO
1. Define requirement? Explain the different types of
requirements.
2. Explain about functional requirements with an example.
3. Explain about requirement validation.
4. Explain about context model with example.
5. With different notations, explain data flow model.
6. Explain state chart model.
7. Explain in brief about user requirements.
8. Briefly, with a neat diagram, explain structured methods.
9. Explain about system requirements.
10. Illustrate system requirements document.
11. Briefly classify and explain the non-functional requirements.
12. Briefly with a neat diagram, explain requirement engineering
process.
13. With an example, explain data models.
14. Explain object models with examples.
15. Explain about requirement management.
UNIT THREE
1. With a neat diagram explain design evaluation?
2. Explain object constraint language?
3. Convert mapping data flow into software architecture?
4. Explain briefly about architectural design?
5. Explain the concept of designing conventional components?
6. Explain about interface analysis?
7. Briefly explain about architectural styles and patterns?
8. Explain about class based components?
9. Briefly explain interface design steps?
10. Define user interface design? Briefly explain about golden
rules?
11. Explain in detail about modeling component level design.
12. Explain in detail about Design engineering.
13. Explain in detail about interface design steps.
14. Briefly explain the concept of creating architectural design.
15. Briefly explain design concepts.
UNIT FOUR
1. Explain the art of debugging with a neat diagram.
2. Explain black box testing.
3. Explain Unit testing with a diagram.
4. Explain about smoke testing with a diagram.
5. Explain the metrics of software quality.
6. Explain in detail about software measurement.
7. Explain about validation testing.
8. Explain about software quality in detail.
9. Explain the metrics for design model.
10. Explain the metrics for software quality.
11. Explain the different types of testing strategies for
conventional software.
12. Explain in detail about product metrics.
13. Explain in detail about metrics for process and products.
14. Explain the strategic approach for software testing.
15. Explain in detail about system testing and regression testing
with neat diagrams.
UNIT FIVE
1. Explain reactive risk with an example.
2. Explain in detail about risk identification.
3. Explain in detail about statistical SQA?
4. Briefly explain FTR.
5. Explain briefly risk refinement.
6. Explain in detail about proactive risk strategies.
7. Explain software quality assurance in detail.
8. Explain the different types of software reviews.
9. Explain risk projection in detail.
10. Explain the different quality concepts.
11. Briefly explain ISO 9000 quality standards.
12. With a neat diagram, explain RMMM plan.
13. Explain reactive vs proactive risk strategies in detail.
14. Explain quality management in detail.
15. Explain about risk management in detail.
6.3. SYLLABUS
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, HYDERABAD
III Year B.Tech. CSE-I-Sem L T/P/D C
(A50518) SOFTWARE ENGINEERING
UNIT-I
Introduction to Software Engineering:
The evolving role of software, changing nature of software, legacy
software, software Myths.
A Generic View of Process:
Software Engineering: A Layered Technology, A Process Framework, The Capability Maturity Model Integration (CMMI), Process Patterns, Process Assessment, Personal and Team Process Models.
Process Models:
The waterfall model, Incremental Process Models, Evolutionary process models, specialized process models, The Unified Process.
UNIT-II
Software Requirements:
Functional and Non Functional requirements, User Requirements,
System Requirements, Interface Specification, The Software
requirements document.
Requirements Engineering Process:
Feasibility Studies, Requirements elicitation & analysis, Requirements validation, Requirements Management
System models:
Context Models, Behavioral Models, data models, Object models,
structured methods
UNIT-III
Design Engineering:
Design Process and Design Quality, Design concepts, The design
Model, Pattern based Software Design.
Creating an Architectural Design:
Software Architecture, Data Design, Architectural styles and
Patterns, Architectural Design, Assessing alternative Architectural
design, Mapping data flow into a Software Architecture
Modeling Component-level design:
Designing class-based components, conducting component-level design, object constraint language, designing conventional components.
Performing User Interface Design:
Golden rules, User interface Analysis and design, Interface Analysis,
Interface design steps, Design Evaluation.
UNIT-IV
Testing strategies:
A Strategic approach to software testing, Test strategies for
conventional Software, Black-Box Testing, White Box Testing,
validation Testing, System Testing, The Art of Debugging.
Product Metrics:
Software Quality, Frame work for Product metrics, Metrics for
Analysis Model, Metrics for Design Model, Metrics for Source code,
Metrics for testing, Metrics for Maintenance.
Metrics for Process and Products:
Software Measurement, Metrics for Software Quality
UNIT-V
Risk Management:
Reactive Vs Proactive Risk Strategies, Software risks, Risk
Identification, Risk Projection, Risk refinement, RMMM, RMMM
Plan.
Quality Management:
Quality concepts, Software quality assurance, software reviews,
Formal technical reviews, Statistical Software quality assurance,
Software reliability, The ISO 9000 quality standards.
Text Books:
[1] Software Engineering: A Practitioner’s Approach, Roger S. Pressman, Sixth Edition, McGraw-Hill International Edition.
[2] Software Engineering, Ian Sommerville, Seventh Edition, Pearson Education.
Reference Books:
[1] Software Engineering: A Precise Approach, Pankaj Jalote, Wiley India, 2010.
[2] Software Engineering: A Primer, Waman S. Jawadekar, Tata McGraw-Hill, 2008.
[3] Fundamentals of Software Engineering, Rajib Mall, PHI, 2005.
[4] Software Engineering: Principles and Practices, Deepak Jain, Oxford University Press.
[5] Software Engineering 1: Abstraction and Modeling, Dines Bjorner, Springer International Edition, 2006.
[6] Software Engineering 2: Specification of Systems and Languages, Dines Bjorner, Springer International Edition, 2006.
[7] Software Engineering Foundations, Yingxu Wang, Auerbach Publications, 2008.
[8] Software Engineering: Principles and Practice, Hans van Vliet, 3rd Edition, John Wiley & Sons Ltd.
[9] Software Engineering 3: Domains, Requirements and Software Design, Dines Bjorner, Springer International Edition.
[10] Introduction to Software Engineering, R. J. Leach, CRC Press.