UNIT V-Coding & Testing
In the coding phase, every module specified in the design document is coded and unit
tested.
During unit testing, each module is tested in isolation from other modules. That is, a
module is tested independently as and when its coding is complete.
After all the modules of a system have been coded and unit tested, the integration and
system testing phase is undertaken.
Integration and testing of modules is carried out according to an integration plan,
which specifies the steps through which the different modules are integrated
together.
During each integration step, a number of modules are added to the partially integrated
system and the resultant system is tested.
The full product takes shape only after all the modules have been integrated together.
System testing is conducted on the full product.
During system testing, the product is tested against its requirements as recorded in the
SRS document.
Testing of professional software is carried out using a large number of test cases.
5.1 CODING
The input to the coding phase is the design document produced at the end of the design
phase.
Design document contains not only the high-level design of the system in the form of a
module structure (e.g., a structure chart), but also the detailed design.
The detailed design is usually documented in the form of module specifications where
the data structures and algorithms for each module are specified.
During the coding phase, different modules identified in the design document are
coded according to their respective module specifications.
The objective of the coding phase is to transform the design of a system into code in
a high-level language, and then to unit test this code.
The main advantages of adhering to a standard style of coding are the following:
o A coding standard gives a uniform appearance to the codes written by different
engineers.
o It facilitates code understanding and code reuse.
o It promotes good programming practices.
A coding standard lists several rules to be followed during coding, such as the way
variables are to be named, the way the code is to be laid out, and the error return
conventions.
The following are general coding standards and guidelines that are commonly adopted
by many software development organisations, rather than an exhaustive list:
1. Rules for limiting the use of globals: These rules list what types of data can be declared
global and what cannot, with a view to limit the data that needs to be defined with global
scope.
2. Standard headers for different modules: The header of different modules should have
standard format and information for ease of understanding and maintenance.
The following is an example of header format that is being used in some companies:
1. Name of the module.
2. Date on which the module was created.
3. Author’s name.
4. Modification history.
5. Synopsis of the module. This is a small writeup about what the module does.
6. Different functions supported in the module, along with their input/output
parameters.
7. Global variables accessed/modified by the module.
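As an illustrative sketch (the module name, dates, author, and function names below are invented for the example), a header following this format might look like:

```c
/**********************************************************************
 * Name of module       : queue.c
 * Date of creation     : 12-Jan-2024
 * Author               : (author name)
 * Modification history : 15-Jan-2024 - fixed overflow check in enqueue()
 * Synopsis             : Implements a bounded FIFO queue of integers.
 * Functions supported  :
 *     int enqueue(int item);   returns 0 on success, -1 if queue full
 *     int dequeue(int *item);  returns 0 on success, -1 if queue empty
 * Global variables     : reads and updates the global QueueSize counter
 **********************************************************************/
```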
3. Naming conventions for global variables, local variables, and constant identifiers:
A popular naming convention is that variables are named using mixed case lettering.
Global variable names would always start with a capital letter (e.g., GlobalData) and
local variable names with a small letter (e.g., localData).
Do not use a variable for multiple purposes: There are several things wrong with this
approach and hence it should be avoided.
Some of the problems caused by the use of a variable for multiple purposes are as
follows:
o Each variable should be given a descriptive name indicating its purpose. This is
not possible if an identifier is used for multiple purposes.
o Use of a variable for multiple purposes can lead to confusion and make it
difficult for somebody trying to read and understand the code.
o Use of variables for multiple purposes usually makes future enhancements more
difficult.
o For example, while changing the final computed result from integer to float
type, the programmer might subsequently notice that it has also been used as a
temporary loop variable that cannot be a float type.
4. Code should be well-documented: As a rule of thumb, there should be at least one comment
line on the average for every three source lines of code.
5. Length of any function should not exceed 10 source lines:
A lengthy function is usually very difficult to understand, as it probably has a large
number of variables and carries out many different types of computations.
For the same reason, lengthy functions are likely to have a disproportionately larger
number of bugs.
6. Do not use GO TO statements:
Use of GO TO statements makes a program unstructured. This makes the program very
difficult to understand, debug, and maintain.
Following two types of reviews are carried out on the code of a module:
o Code inspection.
o Code walkthrough.
Code walkthrough:
o In this technique, a module is taken up for review after the module has been coded,
successfully compiled, and all syntax errors have been eliminated.
o The main objective of code walkthrough is to discover the algorithmic and logical
errors in the code.
Some of these guidelines are as follows:
The team performing code walkthrough should neither be too big nor too small;
it should consist of three to seven members.
Discussions should focus on discovery of errors and avoid deliberations on how to fix
the discovered errors.
In order to foster co-operation and to avoid the feeling among the engineers that they
are being watched and evaluated in the code walkthrough meetings, managers should
not attend the walkthrough meetings.
5.3 TESTING
The aim of program testing is to help identify all defects in a program.
The tester inputs several test data to the system and observes the outputs produced
by it, to check if the system fails on some specific inputs.
Terminologies
A few important terminologies have been standardised by the IEEE Standard Glossary of
Software Engineering Terminology [IEEE90]:
1. A mistake
o is essentially any programmer action that later shows up as an incorrect result during
program execution.
o For example, during coding a programmer might commit the mistake of not initializing
a certain variable.
2. An error
o is the result of a mistake committed by a developer in any of the development
activities.
o One example of an error is a call made to a wrong function.
o The terms error, fault, bug, and defect are considered to be synonyms in the area of
program testing.
3. A failure
o of a program essentially denotes an incorrect behaviour exhibited by the program
during its execution.
o An incorrect behaviour is observed either as an incorrect result produced or as an
inappropriate activity carried out by the program.
o Every failure is caused by some bugs present in the program.
D. Y. Patil Agriculture & Technical University, Talsande Prepared by Prof. S. A. Kumbhar
Subject: Software Engineering UNIT V: Coding & Testing
o examples:
o – The result computed by a program is 0, when the correct result is 10.
o – A program crashes on an input.
o – A robot fails to avoid an obstacle and collides with it.
4. A test case
o is a triplet [I , S, R], where I is the data input to the program under test, S is the state of
the program at which the data is to be input, and R is the result expected to be
produced by the program.
o An example of a test case is—[input: “abc”, state: edit, result: abc is displayed],
which essentially means that the input abc needs to be applied in the edit mode, and
the expected result is that the string abc would be displayed.
8. Testability
o of a requirement denotes the extent to which it is possible to determine whether an
implementation of the requirement conforms to it in both functionality and
performance.
Verification versus validation
1. Verification is the process of determining whether the output of one phase of
software development conforms to that of its previous phase, whereas validation is the
process of determining whether a fully developed software conforms to its requirements
specification.
2. The objective of verification is to check if the work products produced after a phase
conform to that which was input to the phase, whereas validation is applied to the fully
developed and integrated software to check if it satisfies the customer’s requirements.
3. The primary techniques used for verification include review, simulation, formal
verification, and testing; unit and integration testing can be considered as verification
steps where it is verified whether the code is as per the module and module interface
specifications. System testing can be considered as a validation step where it is
determined whether the fully developed code is as per its requirements specification.
4. Verification does not require execution of the software, whereas validation requires
execution of the software.
5. Verification is carried out during the development process to check if the
development activities are proceeding alright, whereas validation is carried out to check
if the right software, as required by the customer, has been developed.
6. Verification is concerned with phase containment of errors, whereas the aim of
validation is to check whether the deliverable software is error free.
There are essentially two main approaches to systematically design test cases:
1. Black-box approach
2. White-box (or glass-box) approach
1. Unit testing
2. Integration testing
3. System testing
Unit testing is referred to as testing in the small, whereas integration and system testing are
referred to as testing in the large.
Stub: The role of stub and driver modules is pictorially shown in Figure 5.3.
A stub procedure is a dummy procedure that has the same I/O parameters as the
function called by the unit under test, but has a highly simplified behaviour.
For example, a stub procedure may produce the expected behaviour using a simple
table look-up mechanism.
Driver: A driver module should contain the non-local data structures accessed by the
module under test.
It should also have the code to call the different functions of the unit under test with
appropriate parameter values for testing.
Figure 5.3: Unit testing with the help of driver and stub modules
The main idea behind defining equivalence classes of input data is that testing the code
with any one value belonging to an equivalence class is as good as testing the code with
any other value belonging to the same equivalence class.
Equivalence classes for a unit under test can be designed by examining the input data and
output data. The following are two general guidelines for designing the equivalence classes:
1. If the input data values to a system can be specified by a range of values, then one
valid and two invalid equivalence classes need to be defined.
For example, if the equivalence class is the set of integers in the range 1 to 10 (i.e., [1,10]), then
the invalid equivalence classes are [−∞,0], [11,+∞].
2. If the input data assumes values from a set of discrete members of some domain, then one
equivalence class for the valid input values and another equivalence class for the invalid input
values should be defined.
For example, if the valid equivalence class is {A,B,C}, then the invalid equivalence class
is U − {A,B,C}, where U is the universe of possible input values.
Example 5.6: For software that computes the square root of an input integer that can
assume values in the range of 0 to 5000, determine the equivalence classes and the
black-box test suite.
Answer: There are three equivalence classes—The set of negative integers, the set of integers
in the range of 0 and 5000, and the set of integers larger than 5000. Therefore, the test cases
must include representatives for each of the three equivalence classes.
Example 5.8: Design equivalence class partitioning test suite for a function that reads a
character string of size less than five characters and displays whether it is a palindrome.
Answer: The equivalence classes are the leaf level classes shown in Figure 5.4. The equivalence
classes are palindromes, non-palindromes, and invalid inputs. Now, selecting one
representative value from each equivalence class, we have the required test suite:
{abc,aba,abcdef}.
Example 5.10: Design boundary value test suite for the function described in Example
5.8.
Answer: The equivalence classes have been shown in Figure 5.5. There is a boundary
between the valid and invalid equivalence classes. Thus, the boundary value test suite is
{abcdefg, abcdef}.
Figure 5.5: CFG for (a) sequence, (b) selection, and (c) iteration type of constructs.
1. Fault-based testing
A fault-based testing strategy targets to detect certain types of faults.
These faults that a test strategy focuses on constitute the fault model of the strategy.
An example of a fault-based strategy is mutation testing
2. Coverage-based testing
A coverage-based testing strategy attempts to execute (or cover) certain elements of a
program.
Popular examples of coverage-based testing strategies are statement coverage, branch
coverage, multiple condition coverage, and path coverage-based testing.
When neither of two testing strategies fully covers the program elements exercised by
the other, the two are called complementary testing strategies.
The concepts of stronger, weaker, and complementary testing are schematically
illustrated in Figure 5.6.
Observe in Figure 5.6(a) that testing strategy A is stronger than B, since B covers only a
proper subset of the elements covered by A.
On the other hand, Figure 5.6(b) shows A and B are complementary testing strategies
since some elements of A are not covered by B and vice versa.
A test suite should, however, be enriched by using various complementary testing
strategies.
Example 5.11 Design statement coverage-based test suite for the following Euclid’s GCD
computation program:
int computeGCD(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
Answer: To design the test cases for the statement coverage, the conditional expression of the
while statement needs to be made true and the conditional expression of the if statement
needs to be made both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x =
3, y =4)}, all statements of the program would be executed at least once.
Example 5.12 For the program of Example 5.11, determine a test suite to achieve
branch coverage.
Answer: The test suite {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x = 3, y = 4)} achieves
branch coverage.
It is easy to show that branch coverage-based testing is a stronger testing than
statement coverage-based testing.
Consider the following program segment:
if (temperature > 150 || temperature > 50)
    setWarningLightOn();
The program segment has a bug in the second component condition: it should have
been temperature < 50. The test suite {temperature=160, temperature=40} achieves
branch coverage. But it is not able to check that setWarningLightOn() should not be
called for temperature values between 50 and 150.
A control flow graph describes the sequence in which the different instructions of a
program get executed.
A CFG is a directed graph consisting of a set of nodes and edges (N, E).
The CFG of the program given in Figure 5.7(a) can be drawn as shown in Figure 5.7(b).
Path
A path through a program is any node and edge sequence from the start node to a
terminal node of the control flow graph of a program.
Path coverage testing does not try to cover all paths, but only a subset of paths called
linearly independent paths (or basis paths).
Method 2: In the program’s control flow graph G, any region enclosed by nodes and
edges can be called a bounded area.
The number of bounded areas in a CFG increases with the number of decision
statements and loops.
Consider the CFG example shown in Figure 5.7. From a visual examination of the CFG,
the number of bounded areas is 2.
Therefore the cyclomatic complexity, computed with this method, is 2 + 1 = 3.
Method 3: The cyclomatic complexity of a program can also be easily computed by computing
the number of decision and loop statements of the program.
If N is the number of decision and loop statements of a program, then the McCabe’s
metric is equal to N + 1.
For the statement S: a=b+c;, DEF(S)={a}, USES(S)={b, c}. The definition of variable X at
statement S is said to be live at statement S1 , if there exists a path from statement S to
statement S1 which does not contain any definition of X .
The definition-use chain (or DU chain) of a variable X is of the form [X, S, S1], where S
and S1 are statement numbers such that X belongs to DEF(S) and to USES(S1), and the
definition of X in the statement S is live at statement S1.
One simple data flow testing strategy is to require that every DU chain be covered at
least once.
Data flow testing strategies are especially useful for testing programs containing nested
if and loop statements
A mutant may or may not cause an error in the program. If a mutant does not introduce
any error in the program, then the original program and the mutated program are
called equivalent programs.
A mutated program is tested against the original test suite of the program. If there exists
at least one test case in the test suite for which a mutated program yields an incorrect
result, then the mutant is said to be dead, since the error introduced by the mutation
operator has successfully been detected by the test suite.
Advantage of mutation testing is that it can be automated to a great extent. The
process of generation of mutants can be automated by predefining a set of primitive
changes that can be applied to the program.
These primitive changes can be simple program alterations such as—
o deleting a statement
o deleting a variable definition
o changing the type of an arithmetic operator (e.g., + to -)
o changing a logical operator ( and to or)
o changing the value of a constant
o Changing the data type of a variable, etc.
Mutation-based testing approach is computationally very expensive, since a large
number of possible mutants can be generated.
Mutation testing is not suitable for manual testing; a testing tool is needed that can
automatically generate the mutants and run the test suite automatically on each
mutant.
A program analysis tool usually is an automated tool that takes either the source code
or the executable code of a program as input and produces reports regarding several
important characteristics of the program, such as
o its size, complexity, adequacy of commenting,adherence to programming
standards, adequacy of testing, etc.
Program analysis tools are categorised into the following:
1. Static analysis tools
2. Dynamic analysis tools
Static program analysis tools assess and compute various characteristics of a program
without executing it.
Static analysis tools analyse the source code to compute certain metrics characterising
the source code (such as size, cyclomatic complexity, etc.) and also report certain
analytical conclusions.
These also check the conformance of the code with the prescribed coding standards. In
this context, it displays the following analysis results:
To what extent the coding standards have been adhered to?
Whether certain programming errors such as uninitialised variables, mismatch between
actual and formal parameters, variables that are declared but never used, etc., exist? A
list of all such errors is displayed.
Examples: Code review techniques such as code walkthrough and code inspection, and
also compilers, perform static analysis.
Limitation: Inability to analyse run-time information such as dynamic memory
references using pointer variables and pointer arithmetic, etc.
Integration testing is carried out after all (or at least some of) the modules have been
unit tested.
Objective of integration testing is to detect the errors at the module interfaces (call
parameters).
For example, it is checked that no parameter mismatch occurs when one module
invokes the functionality of another module.
The objective of integration testing is to check whether the different modules of a
program interface with each other properly.
Any one (or a mixture) of the following approaches can be used to develop the test plan:
1. Big-bang approach to integration testing
2. Top-down approach to integration testing
3. Bottom-up approach to integration testing
4. Mixed (also called sandwiched ) approach to integration testing
All the unit tested modules making up a system are integrated in a single step & tested.
Used only for very small systems.
Problem : once a failure has been detected during integration testing, it is very difficult
to localise the error as the error may potentially lie in any of the modules.
Debugging errors reported during big-bang integration testing are very expensive to fix.
Never used for large programs.
o The complexity that occurs when the system is made up of a large number of
small subsystems that are at the same level.
3. Top-down approach to integration testing
Top-down integration testing starts with the root module in the structure chart and
one or two subordinate modules of the root module.
After the top-level ‘skeleton’ has been tested, the modules that are at the immediately
lower layer of the ‘skeleton’ are combined with it and tested.
Requires the use of program stubs, but does not require any driver routines.
Advantage:
o It requires writing only stubs, and stubs are simpler to write compared to
drivers.
Disadvantage:
o In the absence of lower-level routines, it becomes difficult to exercise the top-
level routines in the desired manner since the lower level routines usually
perform input/output (I/O) operations.
In incremental integration testing, only one new module is added to the partially
integrated system each time.
In phased integration, a group of related modules are added to the partial system each
time.
After all the units of a program have been integrated together and tested, system
testing is taken up.
System tests are designed to validate a fully developed system to assure that it meets
its requirements.
The test cases are designed solely based on the SRS document.
There are essentially three main kinds of system testing depending on who carries out testing:
1. Alpha Testing:
o Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
2. Beta Testing:
o Beta testing is the system testing performed by a select group of friendly
customers.
3. Acceptance Testing:
o Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system.
1. Stress testing
Stress testing is also known as endurance testing.
Stress testing evaluates system performance when it is stressed for short periods of
time.
Stress tests are black-box tests which are designed to impose a range of abnormal and
even illegal input conditions so as to stress the capabilities of the software.
Input data volume, input data rate, processing time, utilisation of memory, etc., are
tested beyond the designed capacity.
For example, suppose an operating system is supposed to support fifteen concurrent
transactions, then the system is stressed by attempting to initiate fifteen or more
transactions simultaneously.
Important for systems that under normal circumstances operate below their maximum
capacity but may be severely stressed at some peak demand hours
For example, if the corresponding non functional requirement states that the response
time should not be more than twenty secs per transaction when sixty concurrent users
are working, then during stress testing the response time is checked with exactly sixty
users working simultaneously.
2. Volume testing
Volume testing checks whether the data structures (buffers, arrays, queues, stacks,
etc.) have been designed to successfully handle extraordinary situations.
For example, the volume testing for a compiler might be to check whether the symbol
table overflows when a very large program is compiled.
3. Configuration testing
Configuration testing is used to test system behaviour in various hardware and
software configurations specified in the requirements.
Sometimes systems are built to work in different configurations for different users.
For instance, a minimal system might be required to serve a single user, and other
extended configurations may be required to serve additional users.
The system is configured in each of the required configurations and depending on the
specific customer requirements, it is checked if the system behaves correctly in all
required configurations.
4. Compatibility testing
This type of testing is required when the system interfaces with external systems (e.g.,
databases, servers, etc.).
Compatibility testing aims to check whether the interfaces with the external systems
are performing as required.
For instance, if the system needs to communicate with a large database system to
retrieve information, compatibility testing is required to test the speed and accuracy of
data retrieval.
5. Regression testing
This type of testing is required when software is maintained to fix some bugs or
enhance functionality, performance, etc.
6. Recovery testing
Tests the response of the system to the presence of faults, or loss of power, devices,
services, data, etc.
The system is subjected to the loss of the mentioned resources (as discussed in the SRS
document) and it is checked if the system recovers satisfactorily.
For example, the printer can be disconnected to check if the system hangs. Or, the
power may be shut down to check the extent of data loss and corruption.
7. Maintenance testing
This addresses testing the diagnostic programs, and other procedures that are required
to help maintenance of the system.
It is verified that the artifacts exist and they perform properly.
8. Documentation testing
It is checked whether the required user manuals, maintenance manuals, and technical
manuals exist and are consistent.
If the requirements specify the types of audience for which a specific manual should be
designed, then the manual is checked for compliance of this requirement.
9. Usability testing
Usability testing concerns checking the user interface to see if it meets all user
requirements concerning the user interface.
During usability testing, the display screens, messages, report formats, and other
aspects relating to the user interface requirements are tested.
A functionally correct GUI is not enough; the GUI also has to be checked against a
checklist of user interface requirements.
10. Security testing
Security testing is essential for software that handles or processes confidential data
that is to be guarded against pilfering.
It needs to be tested whether the system is fool-proof from security attacks such as
intrusion by hackers.
Security testing techniques include password cracking, penetration testing, and attacks
on specific ports, etc.