
Unit 5

CODING

 Coding is undertaken once the design phase is complete and the design documents have
been successfully reviewed.
 After all the modules of a system have been coded and unit tested, the integration and
system testing phase is undertaken.
OBJECTIVE OF CODING:
The objective of the coding phase is to transform the design of a system into code in
a high-level language, and then to unit test this code.
The main advantages of standard style of coding are the following:
 A coding standard gives a uniform appearance to the codes written by different engineers.
 It facilitates code understanding and code reuse.
 It promotes good programming practices.
Coding Standards and Guidelines
Good software development organisations usually develop their own coding
standards and guidelines depending on what suits their organisation best and based on the
specific types of software they develop.
Representative coding standards:
Rules for limiting the use of globals:
 These rules list what types of data can be declared global and what cannot, with a
view to limit the data that needs to be defined with global scope.
Standard headers for different modules:
 The header of different modules should have standard format and information for
ease of understanding and maintenance.
The following is an example of header format that is being used in some companies:
Name of the module.
Date on which the module was created.
Author’s name.
Modification history.
Synopsis of the module. (This is a small write-up about what the module does.)
Different functions supported in the module, along with their input/output parameters.
Global variables accessed/modified by the module.
Naming conventions for global variables, local variables, and constant identifiers:
A popular naming convention is that variables are named using mixed case lettering.
 Global variable names would always start with a capital letter (e.g., GlobalData)
 Local variable names start with small letters (e.g., localData).
 Constant names should be formed using capital letters only (e.g., CONSTDATA).
Conventions regarding error return values and exception handling mechanisms:
 The way error conditions are reported by different functions in a program
should be standard within an organisation.
 For example, on encountering an error condition, all functions should consistently return either a 0 or a 1, independent of which programmer has written the code.
 This facilitates reuse and debugging.
Representative coding guidelines:
 The following are some representative coding guidelines that are
recommended by many software development organisations.
Do not use a coding style that is too clever or too difficult to understand:
 Code should be easy to understand. Many inexperienced engineers actually
take pride in writing cryptic and incomprehensible code.
 Clever coding can obscure meaning of the code and reduce code
understandability; thereby making maintenance and debugging difficult and
expensive.
Do not use an identifier for multiple purposes:
 Programmers often use the same identifier to denote several temporary
entities. For example, some programmers make use of a temporary loop
variable for also computing and storing the final result.
 Some of the problems caused by the use of a variable for multiple purposes are as follows:
Each variable should be given a descriptive name indicating its purpose; this is not possible if a variable is used for multiple purposes.
Use of a variable for multiple purposes can lead to confusion and make it difficult for somebody trying to read and understand the code.
Code should be well-documented:
 As a rule of thumb, there should be at least one comment line on the average
for every three source lines of code.

Length of any function should not exceed 10 source lines:


 A lengthy function is usually very difficult to understand as it probably has a
large number of variables and carries out many different types of
computations.
Do not use GO TO statements:
 Use of GO TO statements makes a program unstructured. This makes the
program very difficult to understand, debug, and maintain.

CODE REVIEW
 Testing is an effective defect removal mechanism. However, testing is applicable to only
executable code.
 Review is a very effective technique to remove defects from source code.
 Review has been acknowledged to be more cost-effective in removing defects as compared
to testing.
 Over the years, review techniques have become extremely popular. Code review for a module is undertaken after the module successfully compiles; that is, after all the syntax errors have been eliminated from the module.
 Obviously, code review does not target the detection of syntax errors in a program, but is designed to detect logical, algorithmic, and programming errors.
 Code review has been recognised as an extremely cost- effective strategy for eliminating
coding errors and for producing high quality code.
 Normally, the following two types of reviews are carried out on the code of a module:
Code inspection.
Code walkthrough.

Code walkthrough
 The main objective of code walkthrough is to discover the algorithmic and logical errors in
the code.
 Code walkthrough is an informal code analysis technique.
 In this technique, a module is taken up for review after the module has been coded,
successfully compiled, and all syntax errors have been eliminated.
 A few members of the development team are given the code a couple of days before the
walkthrough meeting.
 Each member selects some test cases and simulates execution of the code by hand (i.e.,
traces the execution through different statements and functions of the code).
 The members note down their findings of their walkthrough and discuss those in a
walkthrough meeting where the coder of the module is present.
 Code walkthrough is an informal analysis technique, and its guidelines are based on personal experience, common sense, and several other subjective factors.
 Some of these guidelines are the following:
The team performing code walkthrough should not be either too big or too small.
Ideally, it should consist of between three to seven members.
Discussions should focus on discovery of errors and avoid deliberations on how to
fix the discovered errors.

Code Inspection
 During code inspection, the code is examined for the presence of some common
programming errors.
 The inspection process has several beneficial side effects, other than finding errors. The
programmer usually receives feedback on programming style, choice of algorithm, and
programming techniques.
 The other participants gain by being exposed to another programmer’s errors.
 The following is a list of some classical programming errors which can be checked during code inspection:
 Use of uninitialized variables.
 Jumps into loops.
 Non-terminating loops.
 Incompatible assignments.
 Array indices out of bounds.
 Improper storage allocation and deallocation.
 Mismatch between actual and formal parameters in procedure calls.
 Use of incorrect logical operators or incorrect precedence among operators.
 Improper modification of loop variables.
 Comparison of equality of floating point values.
 Dangling reference caused when the referenced memory has not been allocated.

Clean Room Testing


 Clean room testing was pioneered at IBM. This type of testing relies heavily on
walkthroughs, inspection, and formal verification.
 The programmers are not allowed to test any of their code by executing the code other than
doing some syntax testing using a compiler.

SOFTWARE DOCUMENTATION
 When a software is developed, in addition to the executable files and the source code,
several kinds of documents such as users’ manual, software requirements specification
(SRS) document, design document, test document, installation manual, etc., are developed
as part of the software engineering process.
 All these documents are considered a vital part of any good software development practice.
 Good documents are helpful in the following ways:

 Good documents help enhance understandability of code. As a result, the availability of good documents helps to reduce the effort and time required for maintenance.
 Documents help the users to understand and effectively use the system.
 Good documents help to effectively tackle the manpower turnover problem. Even when
an engineer leaves the organisation, and a new engineer comes in, he can build up the
required knowledge easily by referring to the documents.
 Production of good documents helps the manager to effectively track the progress of the
project.
 Different types of software documents can broadly be classified into the following:
 Internal documentation: These are provided in the source code itself.
 External documentation: These are the supporting documents such as SRS
document, installation document, user manual, design document, and test
document.
Internal Documentation
 Internal documentation is the code comprehension features provided in the source code itself. Internal documentation can be provided in the code in several forms.
 The important types of internal documentation are the following:
 Comments embedded in the source code.
 Use of meaningful variable names.
 Module and function headers.
 Code indentation.
 Code structuring (i.e., code decomposed into modules and functions).
 Use of enumerated types.
 Use of constant identifiers.
 Use of user-defined data types.

 For example, the following style of code commenting is not much of a help in understanding
the code.
a=10; /* a made 10 */
External Documentation
 External documentation is provided through various types of supporting documents such as
users’ manual, software requirements specification document, design document, test
document, etc.
 A systematic software development style ensures that all these documents are of good
quality and are produced in an orderly fashion.
 An important feature that is required of any good external documentation is consistency
with the code.
 If the different documents are not consistent, a lot of confusion is created for somebody
trying to understand the software.
 Gunning’s fog index
 Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has been
designed to measure the readability of a document.
 The Gunning’s fog index of a document D can be computed as:
Fog(D) = 0.4 × (average number of words per sentence + percentage of words having three or more syllables)

TESTING
 The aim of program testing is to help identify all defects in a program.
 However, in practice, even after satisfactory completion of the testing phase, it is not
possible to guarantee that a program is error free.
Basic Concepts and Terminologies
How to test a program?
 Testing a program involves executing the program with a set of test inputs and observing if
the program behaves as expected.
 If the program fails to behave as expected, then the input data and the conditions under
which it fails are noted for later debugging and error correction.
 A simplified view of program testing is as follows:

Terminologies
 A mistake is essentially any incorrect step that a programmer may commit in almost any development activity. For example, during coding a programmer might commit the mistake of not initializing a certain variable, or might overlook the errors that might arise in some exceptional situations such as division by zero in an arithmetic operation. Both these mistakes can lead to an incorrect result.
 An error is the result of a mistake committed by a developer in any of the development activities. Among the extremely large variety of errors that can exist in a program, one example of an error is a call made to a wrong function.
 The terms error, fault, bug, and defect are all used interchangeably by the program testing community.
 A failure of a program essentially denotes an incorrect behavior exhibited by the program
during its execution. An incorrect behavior is observed either as an incorrect result produced
or as an inappropriate activity carried out by the program.
 In the following, we give three randomly selected examples:
– The result computed by a program is 0, when the correct result is 10.
– A program crashes on an input.
– A robot fails to avoid an obstacle and collides with it.
 A test case is a triplet [I , S, R], where I is the data input to the
program under test, S is the state of the program at which the data is
to be input, and R is the result expected to be produced by the program. The state of a
program is also called its execution mode.
An example of a test case is—
[input: “abc”, state: edit, result: abc is displayed],
which essentially means that the input abc needs to be applied in the edit mode, and the expected result is that the string abc would be displayed.
 A test scenario is an abstract test case in the sense that it only
identifies the aspects of the program that are to be tested without
identifying the input, state, or output. A test case can be said to be an
implementation of a test scenario.
 A test script is an encoding of a test case as a short program. Test scripts are developed for
automated execution of the test cases.
 A test case is said to be a positive test case if it is designed to test whether the software
correctly performs a required functionality. A test case is said to be negative test case, if it
is designed to test whether the software carries out something, that is not required of the
system.
 Testability of a requirement denotes the extent to which it is possible to determine whether
an implementation of the requirement conforms to it in both functionality and performance.
 A failure mode of a software denotes an observable way in which it can fail. As an
example of the failure modes of a software, consider a railway ticket booking software that
has three failure modes — failing to book an available seat, incorrect seat booking (e.g.,
booking an already booked seat), and system crash.
 Equivalent faults denote two or more bugs that result in the system failing in the same failure mode. For example, consider the following two faults in the C language—division by zero and illegal memory access. These two are equivalent faults, since each of them leads to a program crash.
Verification versus validation
 The objectives of both verification and validation techniques are very similar since both
these techniques are designed to help remove errors in a software.
 Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase;
 For example, a verification step can be to check if the design documents produced after the
design step conform to the requirements specification.
 Validation is the process of determining whether a fully developed software conforms to its requirements specification.
 Validation is applied to the fully developed and integrated software to check if it satisfies the customer’s requirements.
 The primary techniques used for verification include review, simulation, formal verification,
and testing. Review, simulation, and testing are usually considered as informal verification
techniques.
 Formal verification usually involves use of theorem proving techniques or use of automated
tools such as a model checker.
 Validation techniques are primarily based on product testing.
 Verification does not require execution of the software, whereas validation requires
execution of the software.
 Verification is carried out during the development process to check if the development activities are proceeding alright, whereas validation is carried out to check if the right software, as required by the customer, has been developed.
 Verification techniques can be viewed as an attempt to achieve phase containment of errors.
Phase containment of errors has been acknowledged to be a cost-effective way to eliminate
program bugs, and is an important software engineering principle.
Testing Activities
Testing involves performing the following main activities:
 Test suite design: The set of test cases using which a program is to be tested is designed
possibly using several test case design techniques.
 Running test cases and checking the results to detect failures: Each test case is run and
the results are compared with the expected results. A mismatch between the actual result
and expected results indicates a failure. The test cases for which the system fails are noted
down for later debugging.
 Locate error: In this activity, the failure symptoms are analysed to locate the errors. For
each failure observed during the previous activity, the statements that are in error are
identified.
 Error correction: After the error is located during debugging, the code is appropriately
changed to correct the error.

(Testing process)

Why Design Test Cases?


 To reduce testing cost and at the same time to make testing more effective, systematic
approaches have been developed to design a small test suite that can detect most, if not all
failures.
 A minimal test suite is a carefully designed set of test cases such that each test case helps
detect different errors.

 There are essentially two main approaches to systematically design test cases:
 Black-box approach
 White-box (or glass-box) approach
Testing in the Large versus Testing in the Small
 A software product is normally tested in three levels or stages:
 Unit testing
 Integration testing
 System testing
 Unit testing is referred to as testing in the small, whereas integration and
system testing are referred to as testing in the large.
UNIT TESTING
 Unit testing is undertaken after a module has been coded and reviewed.
 During unit testing, the individual functions (or units) of a program are tested.
Driver and stub modules
 In order to test a single module, we need a complete environment to provide all relevant
code that is necessary for execution of the module.
 That is, besides the module under test, the following are needed to test the module:
 The procedures belonging to other modules that the module under test calls.
 Non-local data structures that the module accesses.
 A procedure to call the functions of the module under test with appropriate parameters.
 Modules required to provide the necessary environment (which either call or are
called by the module under test) are usually not available until they too have been
unit tested.
 In this context, stubs and drivers are designed to provide the complete environment
for a module so that testing can be carried out.
Stub:
 The role of stub and driver modules are shown in following figure.
 A stub procedure is a dummy procedure that has the same I/O parameters as the function called by the unit under test, but has a highly simplified behaviour.

(Unit testing with the help of driver and stub modules)

Driver: A driver module should contain the non-local data structures accessed by the module under test. Additionally, it should also have the code to call the different functions of the unit under test with appropriate parameter values for testing.
BLACK-BOX TESTING
 In black-box testing, test cases are designed from an examination of the input/output values
only and no knowledge of design or code is required.
 The following are the two main approaches available to design black box test cases:
 Equivalence class partitioning
 Boundary value analysis
Equivalence Class Partitioning
 The main idea behind defining equivalence classes of input data is that testing the code with
any one value belonging to an equivalence class is as good as testing the code with any
other value belonging to the same equivalence class.
 Equivalence classes for a unit under test can be designed by examining the input data and
output data.
 The following are general guidelines for designing the equivalence classes:
1. If the input data values to a system can be specified by a range of values, then one valid and two invalid equivalence classes need to be defined.
 For example, if the valid equivalence class is the set of integers in the range 1 to 10 (i.e., [1,10]), then the two invalid equivalence classes are [−∞,0] and [11,+∞].

(Example of equivalence class partitioning: the equivalence classes are palindromes, non-palindromes, and valid and invalid inputs.)
Boundary Value Analysis
 Boundary value analysis-based test suite design involves designing test cases using the
values at the boundaries of different equivalence classes.
 To design boundary value test cases, it is required to examine the equivalence classes to
check if any of the equivalence classes contains a range of values.
 For those equivalence classes that are not a range of values (i.e., consist of a discrete
collection of values) no boundary value test cases can be defined.
 For example, if an equivalence class contains the integers in the range 1 to 10, then the
boundary value test suite is {0,1,10,11}.

Summary of the Black-box Test Suite Design Approach


We now summarise the important steps in the black-box test suite design approach:
 Examine the input and output values of the program.
 Identify the equivalence classes.
Design equivalence class test cases by picking one representative value from each
equivalence class.
 Design the boundary value test cases as follows:
Examine if any equivalence class is a range of values. Include the values at the boundaries
of such equivalence classes in the test suite.
WHITE-BOX TESTING
 White-box testing is an important type of unit testing. A large number of white-box testing
strategies exist. Each testing strategy essentially designs test cases based on analysis of
some aspect of source code and is based on some heuristic.
Basic Concepts
 A white-box testing strategy can either be coverage-based or fault-based.
Fault-based testing
 A fault-based testing strategy targets to detect certain types of faults. The faults that a test strategy focuses on constitute the fault model of the strategy.
 An example of a fault-based strategy is mutation testing.
Coverage-based testing
 A coverage-based testing strategy attempts to execute (or cover)
certain elements of a program. Popular examples of coverage-based
testing strategies are statement coverage, branch coverage, multiple
condition coverage, and path coverage-based testing.

Statement Coverage
 The statement coverage strategy aims to design test cases so as to
execute every statement in a program at least once.

int computeGCD(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
Answer: To design the test cases for the statement coverage, the conditional expression of
the while statement needs to be made true and the conditional expression of the if statement
needs to be made both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3),
(x = 3, y = 4)}, all statements of the program would be executed at least once.
Branch Coverage
 A test suite satisfies branch coverage, if it makes each branch condition in the
program to assume true and false values in turn. In other words, for branch coverage
each branch in the CFG representation of the program must be taken at least once,
when the test suite is executed.
 Branch testing is also known as edge testing, since in this testing scheme, each
edge of a program’s control flow graph is traversed at least once.
Multiple Condition Coverage
 In the multiple condition (MC) coverage-based testing, test cases are designed to
make each component of a composite conditional expression to assume both true
and false values. For example, consider the composite conditional expression
((c1 .and.c2 ).or.c3).
Path Coverage
 A test suite achieves path coverage if it executes each linearly independent path (or basis path) at least once.
 A linearly independent path can be defined in terms of the control flow graph
(CFG) of a program.
Control flow graph (CFG)
 A control flow graph describes the sequence in which the different instructions
of a program get executed.
(Control flow diagram of an example program)
Path
 A path through a program is any node and edge sequence from the start node to a
terminal node of the control flow graph of a program.
 For example, the above figure shows there can be an infinite number of paths
such as 12314, 12312314, 12312312314, etc.
McCabe’s Cyclomatic Complexity Metric
 It is used to indicate the complexity of a program.
 McCabe obtained his results by applying graph-theoretic techniques to the control flow graph of a program. McCabe’s cyclomatic complexity defines an upper bound on the number of independent paths in a program. Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as:
V(G) = E – N + 2
 where N is the number of nodes of the control flow graph and E is the number of edges in the control flow graph.
 For the CFG of the example shown in the above figure, E = 7 and N = 6. Therefore, the value of the cyclomatic complexity = 7 – 6 + 2 = 3.
Uses of McCabe’s cyclomatic complexity metric
 Estimation of structural complexity of code:
 Estimation of testing effort
 Estimation of program reliability
Steps to carry out path coverage-based testing
The following is the sequence of steps that need to be undertaken for deriving the path
coverage-based test cases for a program:
1. Draw the control flow graph for the program.
2. Determine McCabe’s metric V(G).
3. The cyclomatic complexity V(G) gives the minimum number of test cases required to achieve path coverage.
4. Repeat designing test cases until all the linearly independent paths are covered.
Data Flow-based Testing
 Data flow based testing method selects test paths of a program according to the
definitions and uses of different variables in a program.
 Consider a program P. For a statement numbered S of P, let:
DEF(S) = {X | statement S contains a definition of X}, and
USES(S) = {X | statement S contains a use of X}.
 For the statement S: a = b + c;, DEF(S) = {a} and USES(S) = {b, c}.
 The definition of variable X at statement S is said to be live at statement S1 ,
if there exists a path from statement S to statement S1 which does not contain
any definition of X.
Mutation Testing
 Mutation testing is a fault-based testing technique in the sense that mutation
test cases are designed to help detect specific types of faults in a program.
 In mutation testing, a program is first tested by using an initial test suite
designed by using various white box testing strategies that we have
discussed.
 After the initial testing is complete, mutation testing can be taken up.
 The idea behind mutation testing is to make a few arbitrary changes to a
program at a time. Each time the program is changed, it is called a mutated
program and the change effected is called a mutant.

INTEGRATION TESTING
 Integration testing is carried out after all (or at least some of ) the modules
have been unit tested. Successful completion of unit testing, to a large extent,
ensures that the unit (or module) as a whole works satisfactorily.
 In this context, the objective of integration testing is to detect the errors at the module interfaces (call parameters); that is, to check whether the different modules of a program interface with each other properly.
 During integration testing, different modules of a system are integrated in a planned
manner using an integration plan.
 The integration plan specifies the steps and the order in which modules are
combined to realise the full system.
 After each integration step, the partially integrated system is tested.
 An important factor that guides the integration plan is the module dependency graph.
the following approaches can be used to develop the test plan:
 Big-bang approach to integration testing
 Top-down approach to integration testing
 Bottom-up approach to integration testing
 Mixed (also called sandwiched ) approach to integration testing
Big-bang approach to integration testing
 Big-bang testing is the most obvious approach to integration testing.
 In this approach, all the modules making up a system are integrated in a single step.
 In simple words, all the unit tested modules of the system are simply linked together and tested.
 However, this technique can meaningfully be used only for very small systems.
 The main problem with this approach is that once a failure has been detected during
integration testing, it is very difficult to localise the error as the error may potentially lie in
any of the modules.
 Therefore, debugging errors reported during big-bang integration testing are very expensive
to fix.
 As a result, big-bang integration testing is almost never used for large programs.

Bottom-up approach to integration testing


 Large software products are often made up of several subsystems.
 A subsystem might consist of many modules which communicate among each other through
well-defined interfaces.
 In bottom-up integration testing, first the modules for each subsystem are integrated.
 Thus, the subsystems can be integrated separately and independently.
 The primary purpose of carrying out the integration testing of a subsystem is to test whether the interfaces among the various modules making up the subsystem work satisfactorily.
 The test cases must be carefully chosen to exercise the interfaces in all possible manners.
 In a pure bottom-up testing no stubs are required, and only test-drivers are required.
 Large software systems normally require several levels of subsystem testing; lower-level subsystems are successively combined to form higher-level subsystems.
 The principal advantage of bottom- up integration testing is that several disjoint subsystems
can be tested simultaneously.
 Another advantage of bottom-up testing is that the low-level modules get tested thoroughly.

Top-down approach to integration testing


 Top-down integration testing starts with the root module in the structure chart and one or
two subordinate modules of the root module.
 After the top-level ‘skeleton’ has been tested, the modules that are at the immediately lower
layer of the ‘skeleton’ are combined with it and tested.
 Top-down integration testing approach requires the use of
program stubs to simulate the effect of lower-level routines that are called by the
routines under test.
 A pure top-down integration does not require any driver routines.
 An advantage of top-down integration testing is that it requires writing only stubs, and stubs
are simpler to write compared to drivers.
 A disadvantage of the top-down integration testing approach is that in the absence of lower-
level routines, it becomes difficult to exercise the top-level routines in the desired manner
since the lower level routines usually perform input/output (I/O) operations.

Mixed approach to integration testing


 The mixed (also called sandwiched ) integration testing follows a
combination of top-down and bottom-up testing approaches.
 In the top-down approach, testing can start only after the top-level modules have been coded and unit tested.
 Similarly, bottom-up testing can start only after the bottom level modules are ready. The
mixed approach overcomes this shortcoming of the top-down and bottom-up approaches.
 In the mixed testing approach, testing can start as and when modules become available after
unit testing.
 Therefore, this is one of the most commonly used integration testing approaches.
 In this approach, both stubs and drivers are required to be designed.

TESTING OBJECT-ORIENTED PROGRAMS


 During the initial years of object-oriented programming, it was believed that object-
orientation would, to a great extent, reduce the cost and effort incurred on testing.
 This thinking was based on the observation that object-orientation incorporates
several good programming features such as encapsulation, abstraction, reuse through
inheritance, polymorphism, etc., thereby the chances of errors in the code are minimised.
 However, it was soon realised that satisfactorily testing object-oriented programs is
much more difficult and requires much more cost and effort as compared to testing
similar procedural programs.
 The main reason is that the various object-oriented features introduce additional
complications and scope for new types of bugs that are not present in procedural
programs. Therefore, additional test cases need to be designed to detect these.
 What is a Suitable Unit for Testing Object-oriented Programs?
 Do Various Object-orientation Features Make Testing Easy?
 Why are Traditional Techniques Considered Not Satisfactory for Testing
Object-oriented Programs?
 Grey-Box Testing of Object-oriented Programs
 Integration Testing of Object-oriented Programs
What is a Suitable Unit for Testing Object-oriented Programs?
 For procedural programs, the procedures are the basic units of testing. That is, first all the
procedures are unit tested.
 Then various tested procedures are integrated together and tested.
 Thus, as far as procedural programs are concerned, procedures are the basic units of testing.
 A method has to be tested with all the other methods and data of the corresponding object.
Moreover, a method needs to be tested at all the states that the object can assume. As a
result, it is improper to consider a method as the basic unit of testing an object-oriented
program.
 An object is the basic unit of testing of object-oriented programs.
Do Various Object-orientation Features Make Testing Easy?
 Encapsulation
 Inheritance:
 Dynamic binding
 Object states
Encapsulation
 The encapsulation feature helps in data abstraction, error isolation, and error
prevention. However, from the testing perspective, encapsulation prevents the tester
from accessing the data internal to an object.
Inheritance:
 The inheritance feature helps in code reuse and was expected
to simplify testing.
 Even if the base class has been thoroughly tested, the methods inherited from
the base class need to be tested again in the derived class.
Dynamic binding:
 Dynamic binding was introduced to make the code compact, elegant, and easily
extensible.
 However, as far as testing is concerned all possible bindings of a method call have to
be identified and tested. This is not easy since the bindings take place at run-time.
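A minimal sketch of why dynamic binding complicates testing (the classes here are hypothetical): the same call site must be covered with every type it can bind to at run-time.

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

def total_area(shapes):
    # The call shape.area() binds dynamically; each possible binding
    # (Square.area, Circle.area, ...) must be covered by some test case.
    return sum(shape.area() for shape in shapes)

assert total_area([Square(2), Square(3)]) == 13        # Square binding exercised
assert abs(total_area([Circle(1)]) - 3.14159) < 1e-9   # Circle binding exercised
```

Covering only one of the two bindings would leave the other `area` implementation untested, even though the source line `shape.area()` has been executed.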
Object states:
 In contrast to the procedures in a procedural program, objects store data permanently.
As a result, objects do have significant states.
 The behaviour of an object is usually different in different states.
 That is, some methods may not be active in some of its states. Also, a method may
act differently in different states.
 For example, when a book has been issued out in a library information system, the
book reaches the issuedOut state.
 In this state, if the issue method is invoked, then it may not exhibit its normal
behaviour.
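The library book example can be sketched in code. This is an illustrative class, not part of any actual system, showing why a method must be tested in every state the object can assume.

```python
class Book:
    """Illustrative book object with state-dependent behaviour."""

    def __init__(self):
        self.state = "available"

    def issue(self):
        # The issue method behaves differently depending on the state.
        if self.state == "issuedOut":
            raise RuntimeError("Book is already issued out")
        self.state = "issuedOut"

    def return_book(self):
        if self.state != "issuedOut":
            raise RuntimeError("Book was not issued")
        self.state = "available"

# issue must be tested in both states: it succeeds when the book is
# available, but does not exhibit its normal behaviour in issuedOut.
b = Book()
b.issue()                 # available -> issuedOut: normal behaviour
try:
    b.issue()             # issuedOut: abnormal behaviour expected
except RuntimeError:
    pass
```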
Why are Traditional Techniques Considered Not Satisfactory for Testing Object-oriented Programs?
 In traditional procedural programs, procedures are the basic unit of testing. In
contrast, objects are the basic unit of testing for object-oriented programs.
 Statement coverage-based testing, which is popular for testing procedural programs, is
not meaningful for object-oriented programs; the reason is that inherited methods
have to be retested in the derived class.
 In fact, the different object- oriented features (inheritance, polymorphism, dynamic
binding, state-based behaviour, etc.) require special test cases to be designed
compared to the traditional testing.
 Hence, traditional techniques are considered not satisfactory for testing object-oriented
programs.
Grey-Box Testing of Object-oriented Programs
 The following are some important types of grey-box testing that can be carried on based on
UML models:
State-model-based testing
State coverage: Each method of an object is tested at each state of the object.
State transition coverage: It is tested whether all transitions depicted in the state
model work satisfactorily.
State transition path coverage: All transition paths in the state model are tested.
Use case-based testing
 Scenario coverage: Each use case typically consists of a mainline scenario and
several alternate scenarios. For each use case, the mainline and all alternate
sequences are tested to check if any errors show up.
Class diagram-based testing
 Testing derived classes: All derived classes of the base class have to
be instantiated and tested. In addition to testing the new methods
defined in the derived class, the inherited methods must be retested.
 Association testing: All association relations are tested.
 Aggregation testing: Various aggregate objects are created and tested.
Sequence diagram-based testing
 Method coverage: All methods depicted in the sequence diagrams are
covered.
 Message path coverage: All message paths that can be constructed from the
sequence diagrams are covered.
Integration Testing of Object-oriented Programs
 There are two main approaches to integration testing of object-oriented programs:
• Thread-based
• Use based
Thread-based approach:
 In this approach, all classes that need to collaborate to realise the behaviour of a
single use case are integrated and tested.
 After all the required classes for a use case are integrated and tested, another use
case is taken up and other classes (if any) necessary for execution of the second use
case to run are integrated and tested. This is continued till all use cases have been
considered.
Use-based approach:
 Use-based integration begins by testing classes that either need no service
from other classes or need services from at most a few other classes.
 After these classes have been integrated and tested, classes that use the
services from the already integrated classes are integrated and tested.
 This is continued till all the classes have been integrated and tested.

SYSTEM TESTING
 After all the units of a program have been integrated together and tested, system testing is
taken up.
 System tests are designed to validate a fully developed system to assure that it meets its
requirements. The test cases are therefore designed solely based on the SRS document.
 The system testing procedures are the same for both object-oriented and procedural
programs, since system test cases are designed based on the SRS document and the actual
implementation (procedural or object oriented) is immaterial.
 There are essentially three main kinds of system testing depending on who carries out
testing:
1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team
within the developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a
select group of friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing
performed by the customer to determine whether to accept the
delivery of the system.

Smoke Testing
 Smoke testing is carried out before initiating system testing to ensure that system testing
would be meaningful, or whether many parts of the software would fail.
 The idea behind smoke testing is that if the integrated program cannot pass even the basic
tests, it is not ready for vigorous testing.
 For smoke testing, a few test cases are designed to check whether the basic functionalities
are working.
 For example, for a library automation system, the smoke tests may check whether books can
be created and deleted, whether member records can be created and deleted, and whether
books can be loaned and returned.
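A smoke test of this kind can be sketched as a few basic checks. The `Library` class here is a purely hypothetical stand-in for the integrated system; the point is only that each check exercises one basic functionality.

```python
class Library:
    """Hypothetical stand-in for the integrated library system."""

    def __init__(self):
        self.books = set()

    def create_book(self, title):
        self.books.add(title)

    def delete_book(self, title):
        self.books.discard(title)

def smoke_test():
    """A few basic checks run before full system testing begins."""
    lib = Library()
    lib.create_book("SE Notes")            # basic functionality: create
    assert "SE Notes" in lib.books
    lib.delete_book("SE Notes")            # basic functionality: delete
    assert "SE Notes" not in lib.books
    return "smoke tests passed"

assert smoke_test() == "smoke tests passed"
```

If even these checks fail, full system testing would not be meaningful yet.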
Performance Testing
 Performance testing is carried out to check whether the system meets the nonfunctional
requirements identified in the SRS document.
 Performance testing is an important type of system testing.
 There are several types of performance testing corresponding to various types of non-
functional requirements.
Stress testing
 Stress testing is also known as endurance testing. Stress testing evaluates system
performance when it is stressed for short periods of time.
 Stress tests are black-box tests which are designed to impose a range of abnormal
and even illegal input conditions so as to stress the capabilities of the software.
 Input data volume, input data rate, processing time, utilisation of memory, etc., are
tested beyond the designed capacity.
 For example, suppose an operating system is supposed to support fifteen concurrent
transactions, then the system is stressed by attempting to initiate fifteen or more
transactions simultaneously.
Volume testing
 Volume testing checks whether the data structures (buffers, arrays, queues, stacks,
etc.) have been designed to successfully handle extraordinary situations.
 For example, the volume testing for a compiler might be to check whether the
symbol table overflows when a very large program is compiled.
Configuration testing
 Configuration testing is used to test system behaviour in various hardware and
software configurations specified in the requirements.
 Sometimes systems are built to work in different configurations for different users.
 For instance, a minimal system might be required to serve a single user, and other
extended configurations may be required to serve additional users.
 The system is configured in each of the required configurations and depending on
the specific customer requirements, it is checked if the system behaves correctly in
all required configurations.
Compatibility testing
 This type of testing is required when the system interfaces with external systems
(e.g., databases, servers, etc.).
 Compatibility testing aims to check whether the interfaces with the external systems are
performing as required.
 For instance, if the system needs to communicate with a large database system to
retrieve information, compatibility testing is required to test the speed and accuracy
of data retrieval.
Regression testing
 This type of testing is required when a software is maintained to fix some bugs or
enhance functionality, performance, etc.
Recovery testing
 Recovery testing tests the response of the system to the presence of faults, or loss of
power, devices, services, data, etc. The system is subjected to the loss of the
mentioned resources (as discussed in the SRS document) and it is checked if the
system recovers satisfactorily.
 For example, the printer can be disconnected to check if the system hangs. Or, the
power may be shut down to check the extent of data loss and corruption.
Maintenance testing
 This addresses testing the diagnostic programs, and other procedures that are
required to help maintenance of the system. It is verified that the artifacts exist and
they perform properly.

Documentation testing
 It is checked whether the required user manual, maintenance manuals, and technical
manuals exist and are consistent.
 If the requirements specify the types of audience for which a specific manual should
be designed, then the manual is checked for compliance of this requirement.
Usability testing
 Usability testing concerns checking the user interface to see if it meets all user
requirements concerning the user interface.
 During usability testing, the display screens, messages, report formats, and other
aspects relating to the user interface requirements are tested.
 A GUI being just functionally correct is not enough.
Security testing
 Security testing is essential for software that handles or processes confidential data that
is to be guarded against pilfering.
 It needs to be tested whether the system is fool-proof from security attacks such as
intrusion by hackers.
 Over the last few years, a large number of security testing techniques have been
proposed, and these include password cracking, penetration testing, and attacks on
specific ports, etc.
Error Seeding
 Sometimes customers specify the maximum number of residual errors that can be
present in the delivered software.
 These requirements are often expressed in terms of the maximum number of allowable
errors per line of source code.
 The error seeding technique can be used to estimate the number of residual errors in
a software. Error seeding, as the name implies, involves seeding the code with
some known errors.
 In other words, some artificial errors are introduced (seeded) into the program. The
number of these seeded errors that are detected in the course of standard testing is
determined.
 These values in conjunction with the number of unseeded errors detected during
testing can be used to predict the following aspects of a program:
 The number of errors remaining in the product.
 The effectiveness of the testing strategy.
 Let N be the total number of defects in the system, and let n of these
defects be found by testing.
 Let S be the total number of seeded defects, and let s of these defects be found
during testing. Assuming that seeded and real defects are equally likely to be
detected, we get:

N/n = S/s, or N = (S × n)/s

 Defects still remaining in the program after testing can be given by:

N − n = n × (S − s)/s
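The estimate can be sketched numerically; the figures below are illustrative, not from any real project.

```python
def estimate_residual_defects(s_total, s_found, n_found):
    """Error-seeding estimate: if s_found of s_total seeded defects were
    detected, the same detection ratio is assumed to hold for real defects."""
    # Estimated total number of real defects: N = (S * n) / s
    n_total = n_found * s_total / s_found
    # Defects still remaining after testing: N - n = n * (S - s) / s
    remaining = n_found * (s_total - s_found) / s_found
    return n_total, remaining

# Example: 100 defects seeded, 80 of them found, and 40 real defects found.
total, remaining = estimate_residual_defects(100, 80, 40)
assert total == 50.0        # estimated total real defects
assert remaining == 10.0    # estimated defects still remaining
```

The quality of the estimate depends on how closely the seeded defects resemble the real ones in how easily they are detected.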