Software Engineering Unit-4
(KCS 601)
Lecture Notes
Software Testing: Testing Objectives, Unit Testing, Integration Testing, Acceptance Testing,
Regression Testing, Testing for Functionality and Testing for Performance, Top-Down and
Bottom-Up Testing Strategies: Test Drivers and Test Stubs, Structural Testing (White Box
Testing), Functional Testing (Black Box Testing), Test Data Suite Preparation, Alpha and Beta
Testing of Products. Static Testing Strategies: Formal Technical Reviews (Peer Reviews),
Walk Through, Code Inspection, Compliance with Design and Coding Standards.
SOFTWARE TESTING –
It is the process of executing a program with the intention of finding errors in the code.
The objective of testing is to show the presence of errors; testing is considered to succeed when an error
is detected.
Bug - In software testing, a bug is the informal name for a defect, which means that the software or
application is not working as per the requirement. A coding error that leads the program to break
down is known as a bug. Test engineers generally use the term Bug.
Defect - Defects in a product represent the inefficiency and inability of the application to meet the
criteria and prevent the software from performing the desired work. In other words, a flaw identified
by the programmer inside the code is called a Defect.
Fault- It is a condition that causes a system to fail in performing its required function. A fault
is the basic reason for software malfunction and is synonymous with the commonly used term
bug.
Tester – A person whose job is to find fault in the product is called Tester.
Test ware – A work product produced by software test engineers or testers, consisting of checklists,
test plans, test cases, test reports, test procedures, etc.
Test Case – A test case is a set of inputs and expected outputs for a program under test.
Debugging – It is a systematic review of the program text in order to locate and fix bugs in the program.
TEST ORACLES:
To test any program, we need to have a description of its expected behavior and a method of
determining whether the observed behavior conforms to the expected behavior. For this we
need a Test Oracle.
A test oracle is a mechanism, different from the program itself, that can be used to check the
correctness of the output of the program for the test cases.
We can consider testing as a process in which the test cases are given to both the test oracle and the
program under test; the outputs of the two are then compared to determine whether the program
behaved correctly for the test cases.
To help the oracle determine the correct behavior, it is important that the behavior of the system
or component is unambiguously specified and that the specification itself is error free.
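As an illustration, the following sketch shows how a test oracle might be realized in C; the function square and the table of expected values are assumptions made only for this example:

    #include <stdio.h>

    /* Program under test (assumed example): returns the square of n. */
    int square(int n) { return n * n; }

    int main(void) {
        /* Test oracle (assumed): a table of inputs with their expected outputs,
           obtained from the specification rather than from the program itself. */
        struct { int input; int expected; } oracle[] = { { -3, 9 }, { 0, 0 }, { 7, 49 } };

        for (int i = 0; i < 3; i++) {
            int observed = square(oracle[i].input);
            /* Compare the observed behavior with the expected behavior. */
            if (observed == oracle[i].expected)
                printf("input %d: PASS\n", oracle[i].input);
            else
                printf("input %d: FAIL (expected %d, got %d)\n",
                       oracle[i].input, oracle[i].expected, observed);
        }
        return 0;
    }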
Debugging is the process of analyzing and locating bugs when software does not
behave as expected.
Design – Design reviews are more technical. The design must be checked for logic faults,
interface faults, lack of exception handling and non-conformance to the specification.
Implementation – Code modules are informally tested by the programmers while they
are being implemented. Formal testing can include non-execution-based methods and
execution-based methods (Black Box Testing and White Box Testing).
Acceptance Testing – The software is delivered to the client, who tests the software on
the actual hardware, using actual data instead of test data. A product cannot be considered
to satisfy its specification until it has passed an acceptance test.
Maintenance – Modified versions of the original product must be tested to ensure that
changes have been correctly implemented.
The objective of testing is to discover the residual design errors before delivery to the
customer.
The failure data collected during the testing process are recorded in order to estimate the
software reliability.
Bug Characteristics and Bug Types –
Characteristics of Software Bugs –
a) The symptom and the cause of a bug may be geographically remote from each other.
b) The symptom may be caused by human error that is not easily traced.
c) The symptom may be a result of timing problems, rather than processing problems.
d) The symptom may be due to causes that are distributed across a number of tasks
running on different processors.
TYPES OF ERRORS-
1) Syntax Errors – They are produced by writing wrong syntax. These are generally
caught by the compiler.
2) Logic/ Algorithm Errors- These errors occur due to
a) Branching too soon
b) Branching too late
c) Testing the wrong condition
d) Initialization errors
e) Forgetting to test for a particular condition
f) Data type mismatch
g) Incorrect formula or computation
3) Documentation Errors –
These occur due to mismatch between documentation and code.
These errors lead to difficulties especially during maintenance
4) Capacity Errors – These errors are due to system performance degradation at capacity.
5) Timing / Coordination Errors –
These errors are mainly found in real time systems.
These errors deal with process coordination and are very difficult to find and correct.
6) Computation and Precision Error –
These errors are caused by rounding and truncation issues while dealing with
real numbers and conversions.
7) Stress / Overload Errors – These errors are caused when user or device capacities are
exceeded.
8) Throughput / Performance Errors – These errors arise due to throughput or
performance degradation, e.g. response-time degradation.
9) Recovery Errors – These are error handling faults.
10) Standard / Procedure Errors – These don’t cause errors in and of themselves but rather create
an environment where errors are created / introduced as the system is tested and
modified.
SOFTWARE TESTING STRATEGIES –
Software testing strategy provides a road map for the software developer, Quality
Assurance organization and the customer.
A strategy must provide guidance for the practitioner and a set of milestones for the manager.
Common characteristics of software testing strategies include the following:
a) Testing begins at the module level & works outward toward the integration of the
entire system (Bottom-up) or high-level modules are tested first, and then lower-
level modules are tested (Top-down).
b) Different testing techniques are appropriate at different times.
c) Testing is conducted by the developer and for large projects, by an independent test
group.
d) Testing & debugging are different activities, but debugging must be accommodated
in any testing strategy.
TYPES OF TESTING –
UNIT TESTING –
Unit testing utilizes white box methods and concentrates on testing
individual programming units.
These units are sometimes referred to as modules or atomic modules, and they represent the
smallest programming entity.
Unit testing is essentially a set of path tests performed to examine the many different paths
through the module.
These tests are conducted to prove that all paths in the program are solid and without
error and will not cause abnormal termination of the program or undesirable results.
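A minimal sketch of a unit test in C is given below; the module max_of_two and its test values are assumed purely for illustration:

    #include <assert.h>
    #include <stdio.h>

    /* Unit (module) under test -- assumed example. */
    int max_of_two(int a, int b) { return (a > b) ? a : b; }

    /* Unit test: exercises the different paths through the module. */
    void test_max_of_two(void) {
        assert(max_of_two(3, 5) == 5);   /* path where b is larger */
        assert(max_of_two(9, 2) == 9);   /* path where a is larger */
        assert(max_of_two(4, 4) == 4);   /* boundary: equal inputs */
        printf("All unit tests for max_of_two passed.\n");
    }

    int main(void) {
        test_max_of_two();
        return 0;
    }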
INTEGRATION TESTING –
It focuses on testing multiple modules working together. Two basic types of integration are
usually used:
1) Top-Down Integration –
It starts at the top of the program hierarchy and travels down its branches.
This can be done either depth first (down one branch to the deepest level) or
breadth first (across the hierarchy before proceeding to the next level).
The main advantage of this type of integration is that the basic skeleton of the program
/ system can be seen and tested early.
Main disadvantage is the use of program stubs until the actual modules are written.
2) Bottom-Up Integration –
This type of integration has the lowest-level modules built and tested first, on an
individual basis and in clusters, using test drivers.
This ensures each module is fully tested before it is utilized by its calling module.
The main advantage is uncovering errors in critical modules early.
Main disadvantage is the fact that most or many modules must be built before a
working program can be presented.
Integration testing procedure can be performed in four ways:
Top Down Strategy
Bottom up Strategy
Big Bang Strategy
Sandwiched Strategy
Test Drivers and Test Stubs:
Drivers are basically used in the BOTTOM-UP testing approach. In the bottom-up approach, the
bottom-level modules are prepared but the top-level modules are not yet prepared. Testing of the
bottom-level modules is therefore not possible through the main program, so we prepare a dummy
program, or driver, to call the bottom-level modules and perform their testing.
Stubs are basically used in the TOP-DOWN approach of integration testing. In this approach, the
upper modules are prepared first and are ready for testing, while the bottom modules are not yet
prepared by the developers.
So, in order to form the complete application, we create dummy programs (stubs) for the lower modules
in the application so that all the functionalities can be tested. A sketch of a driver and a stub follows below.
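The sketch below illustrates, under assumed module names (compute_interest, fetch_rate_stub, high_level_module), how a driver and a stub stand in for modules that are not yet written:

    #include <stdio.h>

    /* ---- Bottom-up: a DRIVER for a finished low-level module (names assumed) ---- */
    int compute_interest(int principal, int rate) {    /* low-level module, already coded */
        return (principal * rate) / 100;
    }

    void driver_for_compute_interest(void) {
        /* Dummy caller standing in for the unwritten higher-level module. */
        printf("interest(1000, 5) = %d\n", compute_interest(1000, 5));
    }

    /* ---- Top-down: a STUB for a missing low-level module (names assumed) ---- */
    int fetch_rate_stub(void) {
        /* Dummy callee standing in for the unwritten lower-level module;
           it simply returns a fixed, canned value. */
        return 5;
    }

    void high_level_module(void) {                      /* upper module, already coded */
        int rate = fetch_rate_stub();                   /* calls the stub instead of the real module */
        printf("interest(2000, rate) = %d\n", compute_interest(2000, rate));
    }

    int main(void) {
        driver_for_compute_interest();   /* bottom-up testing via a driver */
        high_level_module();             /* top-down testing via a stub    */
        return 0;
    }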
Top Down Strategy – Top-Down Integration Testing is a method in which integration testing
takes place from top to bottom, following the control flow of the software system. The higher-level
modules are tested first, and then lower-level modules are tested and integrated in order to check
the software functionality. Stubs are used for testing if some modules are not ready.
Advantages:
o Fault Localization is easier.
o Possibility to obtain an early prototype.
o Critical Modules are tested on priority; major design flaws could be found and fixed
first.
Disadvantages:
o Needs many Stubs.
o Modules at a lower level are tested inadequately.
Bottom Up Strategy – Bottom-up Integration Testing is a strategy in which the lower level
modules are tested first. These tested modules are then further used to facilitate the testing of
higher level modules. The process continues until all modules at the top level are tested. Once the
lower level modules are tested and integrated, the next level of modules is formed.
Advantages:
o Fault localization is easier.
o No time is wasted waiting for all modules to be developed unlike Big-bang approach
Disadvantages:
o Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
o An early prototype is not possible
Big Bang Strategy – Big Bang Testing is an Integration testing approach in which all the
components or modules are integrated together at once and then tested as a unit. This
combined set of components is considered as an entity while testing. If all of the components
in the unit are not completed, the integration process will not execute.
Advantages:
o Convenient for small systems.
Disadvantages:
o Fault Localization is difficult.
o Given the sheer number of interfaces that need to be tested in this approach, some
interface links to be tested could easily be missed.
o Since the Integration testing can commence only after “all” the modules are designed,
the testing team will have less time for execution in the testing phase.
o Since all modules are tested at once, high-risk critical modules are not isolated and
tested on priority. Peripheral modules which deal with user interfaces are also not
isolated and tested on priority.
Sandwiched Strategy – Sandwich Testing is a strategy in which top level modules are tested
with lower level modules at the same time lower modules are integrated with top modules and
tested as a system. It is a combination of Top-down and Bottom-up approaches therefore it is
called Hybrid Integration Testing. It makes use of both stubs as well as drivers.
FUNCTIONAL TESTING –
In this, each function implemented in the module is identified. From this, test data
are devised to test each function separately.
Functional testing verifies that an application does what it is supposed to do and
doesn’t do what it shouldn’t do.
Functional testing includes testing of all the interfaces and should therefore involve
the clients in the process.
Functional testing can be difficult for the following reasons:
o Functions within a module may consist of lower-level functions, each of
which must be tested first.
o Lower-level functions may not be independent.
o Functionality may not coincide with module boundaries; this tends to blur
the distinction between module testing and integration testing.
RETESTING –
Retesting is a process to re-check specific test cases that were found to have bugs in the final
execution.
Generally, testers find these bugs while testing the software application and assign it to
the developers to fix it.
Then the developers fix the bug/s and assign it back to the testers for verification.
This continuous process is called Retesting.
REGRESSION TESTING –
This testing is the process of running a subset of previously executed integration and
function tests to ensure that program changes have not degraded the system.
The regression phase concerns the effect of newly introduced changes on all the
previously integrated code.
It may be conducted manually or using automated tools.
The basic approach is to incorporate selected test cases into a regression bucket that is run
periodically to find regression problems, as sketched below.
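A minimal sketch of a regression bucket in C is shown below; the test case names (test_login, test_add_to_cart, test_checkout) are assumed placeholders for previously executed tests:

    #include <stdio.h>

    /* Previously executed test cases (assumed examples); each returns 1 on pass, 0 on fail. */
    int test_login(void)       { return 1; }
    int test_add_to_cart(void) { return 1; }
    int test_checkout(void)    { return 1; }

    int main(void) {
        /* Regression "bucket": a selected subset of earlier tests, re-run after every change. */
        int (*bucket[])(void) = { test_login, test_add_to_cart, test_checkout };
        const char *names[]   = { "login", "add_to_cart", "checkout" };
        int failures = 0;

        for (int i = 0; i < 3; i++)
            if (!bucket[i]()) { printf("REGRESSION in %s\n", names[i]); failures++; }

        if (failures)
            printf("%d regression(s) found\n", failures);
        else
            printf("No regressions found\n");
        return failures;
    }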
SYSTEM TESTING – System testing is a type of testing where the tester evaluates the whole
system against the specified requirements.
End to End Testing -It involves testing a complete application environment in a
situation that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate.
ACCEPTANCE TESTING –
Acceptance testing is a type of testing where client/business/customer tests the
software with real time business scenarios.
The client accepts the software only when all the features and functionalities work
as expected. This is the last phase of testing, after which the software goes into production.
This is also called User Acceptance Testing (UAT).
Two kinds of Acceptance testing are:
Alpha Testing - This is a form of internal acceptance testing performed mainly by
the in-house software QA and testing teams. Alpha testing is the last testing done by
the test teams at the development site, after system testing and before
releasing the software for beta testing. Alpha testing can also be done by potential users
or customers of the application. Still, this is a form of in-house acceptance testing.
Beta Testing - This is a testing stage that follows the internal full alpha test cycle.
This is the final testing phase where companies release the software to a few
external user groups outside the company’s test teams or employees. This initial
software version is known as the beta version. Most companies gather user feedback
in this release.
BLACK BOX TESTING –
It is also known as Functional Testing, Specification Testing, Behavioral Testing,
Data Driven Testing and input / output driven testing.
In functional testing the structure of the program is not considered. Test cases are
decided solely on the basis of requirements or specifications of the program or
module and the internals of the module or the program are not considered for
selection of test cases.
Basis for deciding test cases in functional testing is the requirements or specifications
of the system or module.
Black Box Testing attempts to uncover the following:
a) Incorrect Functions
b) Data structure Errors
c) Missing Functions
d) Performance Errors
e) Initialization & Termination Errors
f) External Databases Access Errors
Guidelines of Boundary Value Analysis (BVA) –
a) For a given range of input values, say 10.0 to 20.0, identify valid inputs at the ends of the range
(10.0 & 20.0) and invalid inputs just outside it (9.99 & 20.01), and write test cases for the same.
b) For a given set of input values (e.g. 5, 7, 8, 10, 15), identify the minimum and
maximum values as valid inputs (5 & 15) and the values just outside them (4 & 16) as invalid
inputs. A sketch of such boundary value test cases is given below.
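The sketch below turns guideline (a) into executable boundary value test cases; the validity-checking function is_valid is an assumption made for this example:

    #include <assert.h>
    #include <stdio.h>

    /* Assumed function under test: accepts values in the range 10.0 to 20.0 inclusive. */
    int is_valid(double x) { return (x >= 10.0 && x <= 20.0); }

    int main(void) {
        /* Boundary value analysis for the range 10.0 - 20.0 (guideline a): */
        assert(is_valid(10.0)  == 1);   /* valid: lower boundary           */
        assert(is_valid(20.0)  == 1);   /* valid: upper boundary           */
        assert(is_valid(9.99)  == 0);   /* invalid: just below lower bound */
        assert(is_valid(20.01) == 0);   /* invalid: just above upper bound */
        printf("All boundary value test cases passed.\n");
        return 0;
    }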
Comparison Testing –
For critical applications requiring fault tolerance, a number of independent versions of the
software are developed from the same specifications.
If the output from each version is the same, then it is assumed that all implementations are correct.
If the outputs differ, each version is examined to see if it is responsible for the differing output, as
sketched below.
It is not foolproof, because if the specifications supplied to all versions are incorrect,
all versions will likely reflect the error and may produce the same (incorrect) output.
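A minimal sketch of comparison testing in C is given below; the two versions abs_v1 and abs_v2 are assumed examples of independently developed implementations of the same specification:

    #include <stdio.h>

    /* Two independently developed versions of the same specification (assumed examples). */
    int abs_v1(int x) { return (x < 0) ? -x : x; }
    int abs_v2(int x) { return (x >= 0) ? x : -x; }

    int main(void) {
        int inputs[] = { -5, 0, 13 };
        for (int i = 0; i < 3; i++) {
            int o1 = abs_v1(inputs[i]);
            int o2 = abs_v2(inputs[i]);
            /* If the outputs differ, at least one version is examined for a fault. */
            if (o1 != o2)
                printf("input %d: versions disagree (%d vs %d)\n", inputs[i], o1, o2);
            else
                printf("input %d: versions agree (%d)\n", inputs[i], o1);
        }
        return 0;
    }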
[Figure omitted: a program flow graph with nodes a to n. The independent paths identified from it are: Path 1: a-b-d-e; Path 2: a-b-d-f-n-b-d-e; Path 3: a-b-c-g-j-k-m-n-b-d-e; Path 4: a-b-c-g-j-l-m-n-b-d-e; Path 5: a-b-c-g-h-i-n-b-d-e.]
Structural Testing –
Condition Testing :
Condition Testing is done to test all logical conditions in a program module.
It differs from branch coverage only when multiple conditions must be evaluated
to reach a decision.
Multi condition coverage requires sufficient test cases to exercise all possible
combinations of conditions in a program decision.
Test cases are designed so that each condition takes on every possible value at
least once.
E.g.  if ((x) && (y) && (!z))
          printf("valid\n");
      else
          printf("invalid\n");
Hence, two test cases in which each condition takes on both truth values are:
i) X = T, Y = T, Z = F
ii) X = F, Y = F, Z = T
In multi-condition coverage, all combinations of conditions are tested.
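A small driver for the example above is sketched below; wrapping the decision in the function check is an assumption made for testing purposes:

    #include <stdio.h>

    /* Decision from the example above, wrapped in a function for testing (wrapper assumed). */
    void check(int x, int y, int z) {
        if ((x) && (y) && (!z))
            printf("valid\n");
        else
            printf("invalid\n");
    }

    int main(void) {
        /* Test case i)  : X = T, Y = T, Z = F -> each condition is true at least once.  */
        check(1, 1, 0);
        /* Test case ii) : X = F, Y = F, Z = T -> each condition is false at least once. */
        check(0, 0, 1);
        return 0;
    }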
Loop coverage Testing :
This requires sufficient test cases for all program loops to be executed for 0, 1, 2 and
many iterations, covering initialization, typical running and termination conditions. A
sketch of such loop tests is given below.
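The sketch below applies loop coverage to an assumed example function sum_first_n, exercising its loop for 0, 1, 2 and many iterations:

    #include <assert.h>
    #include <stdio.h>

    /* Assumed example: sums the first n elements of an array (contains a simple loop). */
    int sum_first_n(const int a[], int n) {
        int total = 0;
        for (int i = 0; i < n; i++)   /* loop under test */
            total += a[i];
        return total;
    }

    int main(void) {
        int a[] = { 4, 7, 1, 9, 3 };
        assert(sum_first_n(a, 0) == 0);    /* 0 iterations (loop skipped)      */
        assert(sum_first_n(a, 1) == 4);    /* 1 iteration                      */
        assert(sum_first_n(a, 2) == 11);   /* 2 iterations                     */
        assert(sum_first_n(a, 5) == 24);   /* many iterations, up to the bound */
        printf("All loop coverage test cases passed.\n");
        return 0;
    }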
Path Coverage Testing:
It is the most powerful form of white box testing, in which all paths are tested.
This criterion requires sufficient test cases for each feasible path (basis path, etc.) from the
start to the exit of a defined program segment to be executed at least once.
Data Flow Testing:
c_use – It is also called computation use and occurs when a variable is used in a computation. A
path can therefore be identified starting from the definition of the variable and ending at the
statement where it is used in a computation, called a dc path.
p_use – It is similar to c_use except that in the statement the variable appears in a condition
(predicate). So a path can be identified starting from the definition of the variable and ending at the
statement where the variable appears in the predicate, called a dp path.
All_use – In this, paths can be identified starting from the definition of a variable to its every
possible use.
Du_use – In this, a path can be identified starting from the definition of a variable and ending at a
point where it is used but its value is not changed, called an sdu path. An annotated example of
definitions and uses is sketched below.
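The annotated fragment below marks a definition, a c_use and a p_use of the variable x; the fragment itself is an assumed example:

    #include <stdio.h>

    int main(void) {
        int x = 10;                 /* definition (def) of x                             */
        int y = x * 2;              /* c_use of x: x occurs in a computation (dc path
                                       from the definition of x to this statement)       */
        if (x > 5)                  /* p_use of x: x occurs in a predicate (dp path
                                       from the definition of x to this condition)       */
            printf("y = %d\n", y);  /* c_use of y                                        */
        return 0;
    }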
Logic Based Testing –
Logic-based testing is used when the input domain and the resulting processing are amenable to a
decision table representation, as sketched below.
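The sketch below shows logic-based testing with an assumed decision table for a discount rule (membership and order value as conditions); one test case is written per rule:

    #include <stdio.h>

    /* Assumed decision table:
       Rule            R1     R2     R3     R4
       Member?         Y      Y      N      N
       Order >= 1000?  Y      N      Y      N
       Discount        10%    5%     5%     0%                                     */
    int discount_percent(int is_member, int order_value) {
        if (is_member  && order_value >= 1000) return 10;   /* rule R1 */
        if (is_member  && order_value <  1000) return 5;    /* rule R2 */
        if (!is_member && order_value >= 1000) return 5;    /* rule R3 */
        return 0;                                           /* rule R4 */
    }

    int main(void) {
        /* Logic-based testing: one test case per rule of the decision table. */
        printf("R1: %d%%\n", discount_percent(1, 1500));
        printf("R2: %d%%\n", discount_percent(1, 300));
        printf("R3: %d%%\n", discount_percent(0, 2000));
        printf("R4: %d%%\n", discount_percent(0, 300));
        return 0;
    }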
TEST SUITE
Test Plan
A test plan is a detailed document which describes software testing areas and activities. It outlines
the test strategy, objectives, test schedule, required resources (human resources, software, and
hardware), test estimation and test deliverables.
Test Case
A test case is a set of instructions determining whether a software or system behaves as expected.
A test case generally outlines the various inputs and outputs for a particular scenario and provides
step-by-step instructions on executing that scenario. It can also include information about the
expected result after executing those steps.
Example test case (shopping-cart scenario):
Step 1: On the Categories menu on the left, click on Laptops.
  Expected Result: The product listing should be updated, displaying only laptops.
Step 2: Click on the first product (Sony VAIO i5).
  Expected Result: The product’s details page should be displayed.
Step 3: Click on the Add to cart button.
  Expected Result: The window should display an alert saying “Product added.”
Step 4: Click on the Cart option on the main navigation menu.
  Expected Result: The cart page should be displayed, showing the laptop as the only product in the cart. It should display the picture of the laptop, its description (Sony VAIO i5), its price ($790), and a link to delete the item. The cart total should be displayed as $790.
Test Suite
A test suite is a set of tests designed to check the functionality and performance of the software. It
collects individual test cases based on their specific purpose or characteristics.
Example test suite (shopping-cart scenario):
Test case: Adding a single product to an empty cart
  Purpose: Verify whether the cart correctly handles receiving the first item.
Test case: Adding two products to an empty cart
  Purpose: Verify whether the cart correctly handles receiving two items.
Test case: Removing an existing product from the cart
  Purpose: Check whether the functionality of removing items works as expected.
Test case: Adding two instances of the same product to the cart
  Purpose: Verify whether adding two instances of the same item works as expected.
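A minimal sketch of how the test cases of the cart suite above could be collected and run together is given below; the test case functions are assumed stubs used only for illustration:

    #include <stdio.h>

    /* Individual test cases of the shopping-cart example (assumed stubs);
       each returns 1 on pass, 0 on fail.                                  */
    int tc_add_single_product(void)    { return 1; }
    int tc_add_two_products(void)      { return 1; }
    int tc_remove_product(void)        { return 1; }
    int tc_add_duplicate_product(void) { return 1; }

    int main(void) {
        /* Test suite: the individual test cases collected by their common purpose. */
        int (*cart_suite[])(void) = { tc_add_single_product, tc_add_two_products,
                                      tc_remove_product, tc_add_duplicate_product };
        int passed = 0, total = 4;

        for (int i = 0; i < total; i++)
            passed += cart_suite[i]();

        printf("Cart test suite: %d of %d test cases passed.\n", passed, total);
        return 0;
    }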
Static Testing is a type of testing technique performed on a software application where the test
elements are not actually executed or put to use. As the name says, it is the opposite of dynamic
testing, so the application is not tested while it is running. It involves manual processes based on
activities like visually/physically evaluating the code for functionality coverage, analyzing the code
for errors, unused snippets or variables, compliance with coding standards, etc.
Reviews can vary from informal to formal review.
Informal Reviews
o Informal reviews are applied in the early stages of the life cycle of the document.
o These reviews are conducted by a two-person team; in later stages more people are
involved.
o The aim of informal reviews is to improve the quality of the document and help the
authors.
o These reviews do not follow a formal procedure and are not documented.
Formal reviews
o Formal reviews follow the formal process i.e. these reviews are well structured and
managed.
o Following are the phases of formal reviews:
i) Planning
The review process starts with the planning phase. In planning, the review process starts with a
request for review by the author to the inspection leader. In a formal review, the
inspection leader performs the entry check and defines the exit criteria. The entry check
verifies that the document is ready to enter the formal review process.
ii) Kick-off
The kick-off meeting is optional in the review procedure. The aim of the kick-off step is to explain
the objectives of the review and to distribute the documents to the participants.
iii) Preparation
In preparation, reviewers review the document separately using related rules,
procedures, and documents. Every reviewer identifies defects, questions and
comments as per their role and understanding of the document.
iv) Review meeting
Review meeting includes three phases:
1. Logging Phase - Defects and issues identified in the preparation step are
logged page by page.
2. Discussion Phase - This phase handles the issues that require discussion.
3. Decision Phase - A decision on the document under review is made by the reviewers or
participants. Sometimes the decision is based on formal exit criteria (e.g. the average
number of major defects found per page).
v) Rework
If the number of defects found per page is more than a certain level, then the document
needs to be reworked.
vi) Follow-up
In follow-up, the moderator ensures that the author has taken action on all known defects.
The distribution of the updated document and the collection of feedback are completed in
follow-up. It is the responsibility of the moderator to ensure that the information is
correct and is stored for future analysis.
The participants that take part in the review process are:
Moderator: Performs the entry check, follows up on rework, coaches team members,
and schedules the meeting.
Author: Takes responsibility for fixing the defect found and improves the quality of the
document
Scribe: Logs the defects during a review and attends the review meeting.
Reviewer: Checks the material for defects and inspects it.
Reader: The person reading through the documents, one item at a time. The other
inspectors then point out defects.
1) Walkthrough
In a walkthrough, the author leads the members of the review team through the document,
and the participants ask questions and make comments about possible defects.
Goals of walkthrough
Make the document available for the stakeholders both outside and inside the software
discipline for collecting the information about the topic under documentation.
Describe and evaluate the content of the document.
Study and discuss the validity of possible alternatives and proposed solutions.
2) Inspection
The trained moderator guides the Inspection. It is the most formal type of review.
The reviewers are prepared and check the documents before the meeting.
In Inspection, a separate preparation is carried out during which the product is examined and
defects are found. These defects are documented in an issue log.
In Inspection, moderator performs a formal follow-up by applying exit criteria.
Goals of Inspection
Assist the author to improve the quality of the document under inspection.
Efficiently and rapidly remove the defects.
Generate documents with a higher level of quality, which helps to improve the product
quality.
Learn from the defects found previously and prevent the occurrence of similar defects.
Generate common understanding by interchanging information.
3) Technical Review
It is a well-documented defect-detection technique which involves peers and
technical experts.
It is usually led by a trained Moderator and not the Author.
In Technical Review, the product is examined and the defects are found which are mainly
technical ones.
There is no management participation in a Technical Review.
A full report is prepared listing the issues to be addressed.