
SOFTWARE ENGINEERING

(KCS 601)

Lecture Notes

UNIT IV – SOFTWARE TESTING


Unit-4 Syllabus

Software Testing: Testing Objectives, Unit Testing, Integration Testing, Acceptance Testing,
Regression Testing, Testing for Functionality and Testing for Performance, Top-Down and
Bottom-Up Testing Strategies: Test Drivers and Test Stubs, Structural Testing (White Box
Testing), Functional Testing (Black Box Testing), Test Data Suite Preparation, Alpha and Beta
Testing of Products. Static Testing Strategies: Formal Technical Reviews (Peer Reviews),
Walk Through, Code Inspection, Compliance with Design and Coding Standards.

SOFTWARE TESTING –
It is the process of executing a program with the intention of finding errors in the code.

It is the process of exercising or evaluating a system or system component by manual or
automatic means to verify that it satisfies specified requirements or to identify differences
between expected & actual results.

The objective of testing is to show the presence of errors, and a test is considered to succeed when an error
is detected.

TERMS USED IN TESTING:


Error – An error is a mistake, misconception, or misunderstanding on the part of a software
developer. For example, a developer may misunderstand a design notation or a programmer
might type a variable name incorrectly – leads to an Error.

Bug - In software testing, a bug is the informal name for a defect, which means that the software or
application is not working as per the requirement. A coding error that causes a program to break down
is known as a bug. Test engineers generally use the terminology Bug.

Defect - Defects in a product represent the inefficiency and inability of the application to meet the
criteria and prevent the software from performing the desired work. In other words, we can say
that a bug found by the programmer inside the code is called a Defect.

Fault- It is a condition that causes a system to fail in performing its required function. A fault
is the basic reason for software malfunction and is synonymous with the commonly used term
bug.

Failure – It is the inability of a system or component to perform a required function according to
its specification. A software failure occurs if the behavior of the software is different from the
specified behavior. Failure may be caused by functional or performance reasons.

Tester – A person whose job is to find faults in the product is called a Tester.

Test ware – A work product produced by software test engineers or testers, consisting of checklists,
test plans, test cases, test reports, test procedures, etc.

Test Case – A test case is a set of inputs and expected outputs for a program under test.

Debugging – It is a systematic review of the program text in order to find and fix bugs in the program.
TEST ORACLES:
To test any program, we need to have a description of its expected behavior and a method of
determining whether the observed behavior conforms to the expected behavior. For this we
need a Test Oracle.

A test oracle is a mechanism, different from the program itself, that can be used to check the
correctness of the output of the program for the test cases.

We can consider testing as a process in which the test cases are given to the test oracle and to the
program under test; the outputs of the two are then compared to determine whether the program
behaved correctly for the test cases.

Test oracles are necessary for testing.

To help the oracle determine the correct behavior, it is important that the behavior of the system
or component is unambiguously specified and that the specification itself is error free.
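As a small illustration, the following C sketch (hypothetical; the functions square_under_test and
square_oracle are invented for this example, not taken from the notes) uses an independent reference
computation as the oracle against which the output of the program under test is compared:

    #include <assert.h>

    /* Program under test: computes the square of an integer (hypothetical example). */
    static int square_under_test(int x) {
        return x * x;
    }

    /* Test oracle: an independent, trusted way of computing the expected result,
     * used only to judge the output of the program under test. */
    static int square_oracle(int x) {
        int result = 0;
        for (int i = 0; i < x; i++)   /* repeated addition */
            result += x;
        return result;
    }

    int main(void) {
        /* For each test case, both the program and the oracle are run and
         * their outputs compared. */
        int test_cases[] = {0, 1, 2, 5, 12};
        for (int i = 0; i < 5; i++)
            assert(square_under_test(test_cases[i]) == square_oracle(test_cases[i]));
        return 0;
    }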

Software Testing vs Debugging -


Software testing is the process of executing software in a controlled manner, in order to
answer the question:

 Does the software behave as specified?

 Software testing is used in association with verification and validation (V & V).

 Debugging is the process of analyzing and locating bugs when software does not
behave as expected.

 Debugging is an activity that supports testing, but cannot replace testing.


TESTING AND THE SOFTWARE LIFECYCLE –
 Testing should be thought of as an integral part of the software process and an activity
that must be carried out throughout the life cycle.
 Testing and fixing can be done at any stage in the life cycle. However, the cost of
finding and fixing errors increases dramatically as development progresses.

 Types of testing required during the various phases of the software life cycle:

 Requirements – Requirements must be reviewed with the clients; rapid prototyping
can refine requirements & accommodate changing requirements.

 Specifications – The specifications document must be checked for feasibility,
traceability, completeness and absence of contradictions and ambiguities.
Specification reviews are especially effective.

 Design – Design reviews are more technical. The design must be checked for logic faults,
interface faults, lack of exception handling and non-conformance to the specifications.

 Implementation – Code modules are informally tested by the programmers while they
are being implemented. Formal testing can include non-execution-based methods &
execution-based methods (Black Box Testing and White Box Testing).

 Integration – Integration testing is performed to ensure that the modules combine
together correctly to achieve a product that meets its specifications. An appropriate
order of combination must be determined, such as top-down or bottom-up.

 Product Testing – The functionality of the product as a whole is checked against its
specifications. Test cases are derived directly from the specifications document. The
product is also tested for robustness.

 Acceptance Testing – The software is delivered to the client, who tests the software on
the actual hardware, using actual data instead of test data. A product cannot be considered
to satisfy its specification until it has passed an acceptance test.

 Maintenance – Modified versions of the original product must be tested to ensure that
changes have been correctly implemented.

 Software Process Management – The software project management plan must also undergo scrutiny.
Objectives of Software Testing –
Software testing is usually performed for the following objectives :

 Software Quality Improvement

 Verification and Validation

 Software reliability Estimation

i. Software Quality Improvement –


 Software Quality means conformance to the specified software design
requirements.
 Debugging is performed heavily by the programmer to find out design defects.
 Finding the problems and getting them fixed is the purpose of debugging in the programming
phase.

ii. Verification and Validation -


 Testing can serve as a metric. It is heavily used as a tool in the V & V process.
 Software quality cannot be tested directly, but the related factors that make quality
visible can be tested.
 Quality has 3 sets of factors
a) Functionality
b) Engineering and
c) Adaptability
 Tests with the purpose of validating the product works are named clean test, or positive
test.
 The drawbacks are that it can only validate that the software works for the specified test
cases.
 A testable design is a design that can be easily validated, falsified and maintained.
 Because testing is a rigorous effort and requires significant time and cost, design for
testability is also an important design rule for software development.

iii. Software Reliability Estimation –


 Software reliability has important relations with many aspects of software, including
its structure and the amount of testing it has been subjected to.

 The objective of testing is to discover the residual design errors before delivery to the
customer.

 The failure data collected during the testing process are recorded in order to estimate the
software reliability.
Bug Characteristics and Bug Types –
Characteristics of Software Bugs –

a) The symptom & the cause of a bug may exist geographically remote from each other.

b) The symptom may be caused by human error that is not easily traced.

c) The symptom may be a result of timing problems, rather than processing problems.

d) The symptom may disappear when another error is corrected.

e) The symptom may actually be caused by non-errors.

f) It may be difficult to accurately reproduce input conditions

g) The symptom may be due to causes that are distributed across a number of tasks
running on different processors.

TYPES OF ERRORS-
1) Syntax Errors – They are produced by writing wrong syntax. These are generally
caught by compiler.
2) Logic/ Algorithm Errors- These errors occur due to
a) Branching too soon
b) Branching too late
c) Testing the wrong condition
d) Initialization errors
e) Forgetting to test for a particular condition
f) Data type mismatch
g) Incorrect formula or computation
3) Documentation Errors –
 These occur due to mismatch between documentation and code.
 These errors lead to difficulties especially during maintenance
4) Capacity Errors – These errors are due to system performance degradation at capacity.
5) Timing / Coordination Errors –
 These errors are mainly found in real time systems.
 These errors deal with process coordination and are very difficult to find and correct.
6) Computation and Precision Error –
 These errors are caused by rounding and truncation issues while dealing with
real numbers & conversions.
7) Stress / Overload Errors – These errors are caused when user / device capacities are
exceeded.
8) Throughput / Performance Errors – These errors arise due to throughput or
performance degradation, e.g. response time degradation.
9) Recovery Errors – These are error handling faults.
10) Standards / Procedures – These don't cause errors in and of themselves but rather create
an environment where errors are created / introduced as the system is tested and
modified.
SOFTWARE TESTING STRATEGIES –

Software testing strategy provides a road map for the software developer, Quality
Assurance organization and the customer.
A strategy must provide guidance for the practitioner and a set of milestones for the manager.
Common characteristics of software testing strategies include the following:
a) Testing begins at the module level & works outward toward the integration of the
entire system (Bottom-up) or high-level modules are tested first, and then lower-
level modules are tested (Top-down).
b) Different testing techniques are appropriate at different times.
c) Testing is conducted by the developer and for large projects, by an independent test
group.
d) Testing & debugging are different activities, but debugging must be accommodated
in any testing strategy.

TYPES OF TESTING –

UNIT TESTING –

 Unit testing procedures utilize white box methods and concentrate on testing
individual programming units.
 These units are sometimes referred to as modules or atomic modules and they represent the
smallest programming entity.
 Unit testing is essentially a set of path tests performed to examine the many different paths
through a module.
 These types of tests are conducted to prove that all paths in the program are solid and without
error and will not cause abnormal termination of the program or undesirable results.
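A minimal C sketch of a unit test, assuming a hypothetical module max_of() as the unit under test
(the function name and test values are invented for illustration); the test cases are chosen so that
every path through the unit is executed:

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical unit under test: returns the larger of two integers. */
    static int max_of(int a, int b) {
        return (a > b) ? a : b;
    }

    int main(void) {
        assert(max_of(7, 3) == 7);   /* path taken when a > b  */
        assert(max_of(2, 9) == 9);   /* path taken when a <= b */
        assert(max_of(5, 5) == 5);   /* boundary case: a == b  */
        printf("All unit tests passed.\n");
        return 0;
    }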

INTEGRATION TESTING –
It focuses on testing multiple modules working together. Two basic types of integration are
usually used:
1) Top-Down Integration –
 It starts at the top of the program hierarchy and travels down its branches.
 This can be done either depth first (down to the deepest level of one branch) or
breadth first (across the hierarchy) before proceeding to the next level.
 The main advantage of this type of integration is the basic skeleton of the program
/ system can be seen and tested early.
 Main disadvantage is the use of program stubs until the actual modules are written.
2) Bottom-Up Integration –
 This type of integration has the lowest level modules built & tested first on
individual bases and in clusters using test drivers.
 This ensures each module is fully tested before it is utilized by its calling module.
 The main advantage is in uncovering errors in critical low-level modules early.
 Main disadvantage is the fact that most or many modules must be built before a
working program can be presented.
Integration testing procedure can be performed in four ways:
 Top Down Strategy
 Bottom up Strategy
 Big Bang Strategy
 Sandwiched Strategy
Test Drivers and Test Stubs:
Drivers are basically used in the BOTTOM-UP testing approach. In the bottom-up approach the
bottom level modules are prepared, but the top level modules are not. Testing of the
bottom level modules is therefore not possible through the main program, so we prepare a dummy
program, or driver, to call the bottom level modules and perform their testing.

Stubs are basically used in TOP-DOWN approach of integration testing. In this approach, the
upper modules are prepared first and are ready for testing while the bottom modules are not yet
prepared by the developers.
So in order to form the complete application we create dummy programs for the lower modules in
the application so that all the functionalities can be tested.
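The sketch below illustrates both ideas in C under invented names (get_account_balance(),
can_withdraw() are hypothetical, not from the notes); the stub stands in for an unwritten
lower-level module, while the throw-away main() acts as the driver for the module under test:

    #include <stdio.h>

    /* STUB: a dummy replacement for a lower-level module (e.g. a database
     * layer) that is not written yet; it returns a fixed, predictable value. */
    static int get_account_balance(int account_id) {
        (void)account_id;          /* parameter ignored by the stub */
        return 1000;               /* canned response */
    }

    /* Module under test: depends on the (stubbed) lower-level module. */
    static int can_withdraw(int account_id, int amount) {
        return amount <= get_account_balance(account_id);
    }

    /* DRIVER: a throw-away main program that plays the role of the missing
     * higher-level module and simply calls the module under test. */
    int main(void) {
        printf("withdraw 500 : %s\n", can_withdraw(1, 500)  ? "allowed" : "denied");
        printf("withdraw 5000: %s\n", can_withdraw(1, 5000) ? "allowed" : "denied");
        return 0;
    }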

Top Down Strategy – Top-Down Integration Testing is a method in which integration testing
takes place from top to bottom following the control flow of software system. The higher level
modules are tested first and then lower level modules are tested and integrated in order to check
the software functionality. Stubs are used for testing if some modules are not ready.
Advantages:
o Fault Localization is easier.
o Possibility to obtain an early prototype.
o Critical Modules are tested on priority; major design flaws could be found and fixed
first.
Disadvantages:
o Needs many Stubs.
o Modules at a lower level are tested inadequately.

Bottom Up Strategy – Bottom-up Integration Testing is a strategy in which the lower level
modules are tested first. These tested modules are then further used to facilitate the testing of
higher level modules. The process continues until all modules at top level are tested. Once the
lower level modules are tested and integrated, then the next level of modules are formed.
Advantages:
o Fault localization is easier.
o No time is wasted waiting for all modules to be developed unlike Big-bang approach
Disadvantages:
o Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
o An early prototype is not possible

Big Bang Strategy – Big Bang Testing is an Integration testing approach in which all the
components or modules are integrated together at once and then tested as a unit. This
combined set of components is considered as an entity while testing. If all of the components
in the unit are not completed, the integration process will not execute.
Advantages:
o Convenient for small systems.
Disadvantages:
o Fault Localization is difficult.
o Given the sheer number of interfaces that need to be tested in this approach, some
interface links to be tested could easily be missed.
o Since the Integration testing can commence only after “all” the modules are designed,
the testing team will have less time for execution in the testing phase.
o Since all modules are tested at once, high-risk critical modules are not isolated and
tested on priority. Peripheral modules which deal with user interfaces are also not
isolated and tested on priority.

Sandwiched Strategy – Sandwich Testing is a strategy in which top level modules are tested
with lower level modules at the same time lower modules are integrated with top modules and
tested as a system. It is a combination of Top-down and Bottom-up approaches therefore it is
called Hybrid Integration Testing. It makes use of both stubs as well as drivers.

FUNCTIONAL TESTING –
 In this, each function implemented in the module is identified. From this, test data
are devised to test each function separately.
 Functional testing verifies that an application does what it is supposed to do and
doesn't do what it shouldn't do.
 Functional testing includes testing of all the interfaces and should therefore involve
the clients in the process.
 Functional testing can be difficult for the following reasons:
o Functions within a module may consist of lower level functions, each of
which must be tested first.
o Lower level functions may not be independent.
o Functionality may not coincide with module boundaries; this tends to blur
the distinction between module testing and integration testing.
RETESTING –
 Retesting is a process to check specific test cases that are found with bug/s in the final
execution.
 Generally, testers find these bugs while testing the software application and assign it to
the developers to fix it.
 Then the developers fix the bug/s and assign it back to the testers for verification.
 This continuous process is called Retesting.

REGRESSION TESTING –
 This testing is the process of running a subset of previously executed integration &
function tests to ensure that program changes have not degraded the system.
 The regression phase concerns the effect of newly introduced changes on all the
previously integrated code.
 It may be conducted manually or using automated tools.
 The basic approach is to incorporate selected test cases into a regression bucket that is run
periodically to find regression problems.

SYSTEMS TESTING – System testing is types of testing where tester evaluates the whole
system against the specified requirements.
End to End Testing -It involves testing a complete application environment in a
situation that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate.

ACCEPTANCE TESTING –
Acceptance testing is a type of testing where client/business/customer tests the
software with real time business scenarios.
The client accepts the software only when all the features and functionalities work
as expected. This is the last phase of testing, after which the software goes into production.
This is also called User Acceptance Testing (UAT).
Two kinds of Acceptance testing are:
 Alpha Testing - This is a form of internal acceptance testing performed mainly by
the in-house software QA and testing teams. Alpha testing is the last testing done by
the test teams at the development site, after system testing and before
releasing the software for beta testing. Alpha testing can also be done by potential users
or customers of the application; still, it is a form of in-house acceptance testing.
 Beta Testing - This is the testing stage that follows the full internal alpha test cycle.
It is the final testing phase, in which companies release the software to a few
external user groups outside the company's test teams or employees. This initial
software version is known as the beta version. Most companies gather user feedback
in this release.
BLACK BOX TESTING –
 It is also known as Functional Testing, Specification Testing, Behavioral Testing,
Data Driven Testing and input / output driven testing.
 In functional testing the structure of the program is not considered. Test cases are
decided solely on the basis of requirements or specifications of the program or
module and the internals of the module or the program are not considered for
selection of test cases.
 Basis for deciding test cases in functional testing is the requirements or specifications
of the system or module.
 Black Box Testing attempts to uncover the following :
a) Incorrect Functions
b) Data structure Errors
c) Missing Functions
d) Performance Errors
e) Initialization & Termination Errors
f) External Databases Access Errors

Advantages of Black Box Testing:


1) The test is unbiased because the designer and the tester are independent of each other.
2) The tester does not need knowledge of any specific programming language.
3) The test is done from the point of view of the user, not the designer.
4) Test cases can be designed as soon as the specifications are complete.

Disadvantages of Black Box Testing –


1) The test can be redundant if the software designer has already run a test case.
2) The test cases are difficult to design.

BLACK BOX TESTING TECHNIQUES –


a) Positive and Negative Functional Testing
b) Equivalence Class Partitioning
c) Boundary value Analysis
d) Cause Effect Graphs
e) Comparison Testing

Positive Functional Testing:


This testing entails exercising the application's functions with valid input and
verifying that the output is correct.
Negative Functional Testing:
This testing involves exercising the application using a combination of invalid
inputs, unexpected conditions and other "out of bounds" scenarios.
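A brief C sketch of the two ideas, assuming a hypothetical function is_valid_username() (an invented
example) that accepts names of 3 to 10 characters:

    #include <assert.h>
    #include <string.h>

    /* Hypothetical function under test: returns 1 if the user name is
     * between 3 and 10 characters long, 0 otherwise. */
    static int is_valid_username(const char *name) {
        size_t len = name ? strlen(name) : 0;
        return len >= 3 && len <= 10;
    }

    int main(void) {
        /* Positive tests: valid inputs, expected to be accepted. */
        assert(is_valid_username("abc") == 1);
        assert(is_valid_username("charlie") == 1);

        /* Negative tests: invalid and "out of bounds" inputs, expected to be rejected. */
        assert(is_valid_username("") == 0);
        assert(is_valid_username("ab") == 0);
        assert(is_valid_username("averylongusername") == 0);
        assert(is_valid_username(NULL) == 0);
        return 0;
    }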
Equivalence Class Partitioning:
The idea is to partition the input domain of the system into a finite number of equivalence
classes such that each member of a class would behave in a similar fashion.
This technique increases the efficiency of software testing as the number of input states is
drastically reduced. This technique involves 2 steps:
a) Identification of equivalence classes
b) Generating the test cases
Identification of Equivalence Classes -
Following guidelines are used -
a) Partition any input domain into a minimum of two sets: valid values and invalid values.
b) If a range of valid values is specified as input, select one valid input within the range
and two invalid inputs just outside each end of the range. Ex – if the required height is (160 cm
– 170 cm), then for testing one valid input can be 165 cm and two invalid inputs can be
159 cm and 171 cm.
c) If a set of defined values is specified as input, define one valid input from within
the set and one invalid input outside the set.
Ex – if the degree required is B.E., M.E. or Ph.D., then a valid input is M.E. and an invalid input is MBA.
d) If a specific enumeration of valid values (e.g. 10, 15, 16, 18, 20) is specified as the input
space, select one valid input from the enumeration (e.g. 16) and invalid inputs outside it (e.g. 8 & 22).
e) If a mandatory format is defined for the input, say the input must start with $, define
one valid input where the first character is $ and one invalid input without $.

ii) Generating the test cases –

a) To each valid & invalid class of input, assign a unique identification number.
b) Write test cases covering all valid classes of inputs.
c) Write test cases covering all invalid classes of inputs, such that no test case contains
more than one invalid input, so that one invalid input does not mask another.
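A short C sketch of the resulting test cases for the height example above (valid range 160 cm to
170 cm), with a hypothetical function height_ok() standing in for the module under test:

    #include <assert.h>

    /* Hypothetical function under test: accepts a height in cm if it lies
     * in the valid range 160-170 used in the example above. */
    static int height_ok(int cm) {
        return cm >= 160 && cm <= 170;
    }

    int main(void) {
        /* One test case from the valid equivalence class. */
        assert(height_ok(165) == 1);

        /* One test case per invalid class, each containing only a single
         * invalid input so a failure is easy to attribute. */
        assert(height_ok(159) == 0);   /* below the valid range */
        assert(height_ok(171) == 0);   /* above the valid range */
        return 0;
    }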

Boundary Value Analysis –


It has been observed that boundaries are a very good place for errors to occur. Hence, if test
cases are designed for the boundary values of the input domain, the efficiency of testing increases,
thereby increasing the probability of detecting errors.

Guidelines of BVA –
a) For a given range of input values, say 10.0 to 20.0, identify valid inputs at the ends of the range
(10.0, 20.0) and invalid inputs just outside it (9.99 & 20.01), and write test cases for the same.
b) For a given set of input values, say (5, 7, 8, 10, 15), identify the minimum and
maximum values as valid inputs (5 & 15) and values just outside them as invalid inputs (4 & 16).
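A C sketch of guideline (a), assuming a hypothetical function in_range() (invented for this
illustration) that accepts values in 10.0 to 20.0; the test cases sit exactly on and just outside
the boundaries:

    #include <assert.h>

    /* Hypothetical function under test: valid range 10.0-20.0, as in guideline (a). */
    static int in_range(double v) {
        return v >= 10.0 && v <= 20.0;
    }

    int main(void) {
        /* The boundary values themselves are valid. */
        assert(in_range(10.0) == 1);
        assert(in_range(20.0) == 1);

        /* Values just outside the boundaries are invalid. */
        assert(in_range(9.99) == 0);
        assert(in_range(20.01) == 0);
        return 0;
    }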

Cause Effect Graphs –


 It establishes relationships between logical input combinations called causes and
corresponding actions called effects.
 The causes & effects are represented using a Boolean graph. The left-hand column of the
figure gives the various logical associations among causes and effects.
 The right-hand column indicates potential constraining associations that might apply
to other causes or effects.

Guidelines for Cause Effect Graph –


- Causes and effects are listed for modules and an identifier is assigned to each.
- Cause effect graph is developed.
- The graph is converted to a decision table.
- Decision table’s rules are converted to test cases.
Symbols – The Boolean graph uses the symbols IDENTITY, NOT, OR and AND.
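As an illustration of the guidelines above, the C sketch below uses an invented rule set (the causes
C1, C2 and the function process() are hypothetical, not from the notes) and derives one test case per
decision-table rule:

    #include <assert.h>
    #include <ctype.h>

    /* Hypothetical rule set (decision table):
     *   Cause C1: first character is 'A'
     *   Cause C2: second character is a digit
     *   Effect E1 (return 1): input accepted  -> requires C1 AND C2
     *   Effect E2 (return 0): input rejected  -> otherwise           */
    static int process(const char *in) {
        int c1 = (in[0] == 'A');
        int c2 = (isdigit((unsigned char)in[1]) != 0);
        return (c1 && c2) ? 1 : 0;
    }

    int main(void) {
        /* One test case per decision-table rule. */
        assert(process("A1") == 1);   /* C1=T, C2=T -> E1 */
        assert(process("A?") == 0);   /* C1=T, C2=F -> E2 */
        assert(process("B1") == 0);   /* C1=F, C2=T -> E2 */
        assert(process("B?") == 0);   /* C1=F, C2=F -> E2 */
        return 0;
    }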

Comparison Testing –
 For critical applications requiring fault tolerance, a number of independent versions of the
software are developed from the same specifications.
 If the output from each version is the same, then it is assumed that all implementations are correct.
 If the outputs differ, each version is examined to see if it is responsible for the differing output.
 It is not foolproof, because if the specifications supplied to all versions are incorrect,
all versions will likely reflect the error and may produce the same output.

WHITE BOX TESTING –


 It is also known as Glass box Testing, Structural Testing, Clear Box Testing, Open Box
testing, Logic Driven Testing and Path Oriented Testing.
 White Box testing is used to test internals of the program. This is done by examining
the program structure and by deriving test cases from the program logic.
 Test cases are derived to ensure that
a) All independent paths in the program are executed at least once.
b) All logical decisions are tested i.e all possible combinations of true or false are tested.
c) All loops are tested.
d) All internal data structures are tested for their validity.

Advantages of White Box Testing –


a) Forces test developer to reason carefully about implementation.
b) Approximates the partitioning done by execution equivalence
c) Reveals errors in hidden code.
d) Beneficial side effects.
e) Optimizations.

Disadvantages of White Box Testing –


a) Expensive.
b) Misses cases omitted from the code.
Types of White Box Testing –
a) Basis Path Testing
b) Structural Testing
c) Logic based Testing
d) Fault Based Testing

Basis Path Testing –


- It allows the design & definition of a basis set of execution paths.
- The test cases created from the basis set allow the program to be executed in such a way
that each independent path is examined and each statement is executed at least once.
Following steps are followed –
a) Construction of flow graph from Source Code or flow charts.
b) Identification of independent paths.
c) Computation of cyclomatic complexity.
d) Test case design.
Construction of flow graph – A flow graph consists of a number of nodes, represented as circles,
connected by directed arcs (edges).
Computation of Cyclomatic Complexity –
Cyclomatic complexity is a metric that measures the logical complexity of a program. Its value
gives the number of independent paths in the program that must be executed in order to ensure that all
statements in the program are executed at least once.
It gives an upper bound on the number of test cases that must be designed.
C(g) = E – N + 2P

E = Number of edges in the flow graph
N = Number of nodes
P = Number of connected components (P = 1 for a single flow graph, giving C(g) = E – N + 2)
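As a separate, hypothetical illustration (the function classify() is invented and unrelated to the
flow graph used below), the fragment has three decisions; its cyclomatic complexity can be computed
from E and N, or from the common shortcut "number of predicate (decision) nodes + 1":

    #include <stdio.h>

    /* Hypothetical module with three decisions, used to illustrate the metric. */
    int classify(int x, int y) {
        int r = 0;
        if (x > 0)             /* predicate node 1 */
            r = 1;
        if (y > 0)             /* predicate node 2 */
            r += 2;
        if (x > 0 && y > 0)    /* predicate node 3 */
            r += 4;
        return r;
    }

    int main(void) {
        /* One way of drawing the flow graph for classify() gives E = 10 edges
         * and N = 8 nodes, so C(g) = 10 - 8 + 2 = 4; the shortcut
         * "predicate nodes + 1" gives the same value. At most 4 test cases
         * are therefore needed to cover a basis set of paths, for example: */
        printf("%d %d %d %d\n",
               classify(1, 1),   /* all three predicates true        */
               classify(1, 0),   /* only the first predicate true    */
               classify(0, 1),   /* only the second predicate true   */
               classify(0, 0));  /* all predicates false             */
        return 0;
    }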
ii) Identification of independent paths –
 An independent path in a program is a path consisting of at least one new condition or set
of processing statements.
 In the case of a flow graph, it must contain at least one new edge which is not traversed or
included in other paths.
 The number of independent paths is given by the value of the cyclomatic complexity.
Example: for a flow graph with E = 15 edges and N = 12 nodes, C(g) = 15 – 12 + 2 = 5.

The five independent paths for that graph are:

Path 1 : a – b – d – e
Path 2 : a – b – d – f – n – b – d – e
Path 3 : a – b – c – g – j – k – m – n – b – d – e
Path 4 : a – b – c – g – j – l – m – n – b – d – e
Path 5 : a – b – c – g – h – i – n – b – d – e

Design of test cases :


 Test cases can now be designed for execution of independent paths as identified.
 This ensures that all statements are executed at least once.

Structural Testing –

 It examines the source code and analyses what is present in the code.
 Structural testing techniques are often dynamic, meaning that the code is executed during
analysis.
 This implies a high test cost due to compilation or interpretation, linkage, file
management and execution time.
 Structural testing cannot expose errors of code omission, but it can estimate the adequacy
of a test suite in terms of code coverage, that is, the execution of components by the test
suite, and hence its fault-finding ability.
 Following are some important types of structural testing –
a) Statement Coverage Testing
b) Branch Coverage Testing
c) Condition Coverage Testing
d) Loop Coverage Testing
e) Path Coverage testing
f) Domain & Boundary Testing
g) Data flow Testing

Statement Coverage Testing –

- In this, a series of test cases is run such that each statement is executed at least once.
- A weakness of this approach is that there is no guarantee that all outcomes of branches
are properly tested.
- Ex –  if (x > 50 && y < 10)
            z = x + y;
        printf("%d\n", z);
        x = x + 1;
 For this fragment, the values x = 60 & y = 5 are sufficient to execute all the statements.
 The main disadvantage of statement coverage is that it does not handle control structures
fully.
 It does not report whether loops reach their termination condition or not, and it is
insensitive to the logical operators.
 It only confirms that all statements are executed at least once.

Branch Coverage Testing :

 In this, a series of tests is run to ensure that all branches are tested at least once.
 It is also called decision coverage.
 Techniques such as statement or branch coverage are called structural tests.
 It requires sufficient test cases for each program decision or branch to be executed so
that each possible outcome occurs at least once.
        if ((x < 20) && (y > 50))
            total = total + x;
        else
            total = total + y;
 This can be tested with one test case that makes the whole condition true (e.g. x = 10,
y = 60) and one that makes it false (e.g. x = 30, y = 60).
 Disadvantage - this may ignore branches within a Boolean expression.
Ex –    if (x && (y || add_digit()))
            printf("success\n");
        else
            printf("failure\n");
 Branch coverage can completely exercise this control structure without ever calling
the function add_digit(): the expression is already true when x and y are both true, and
already false when x is false.

Condition Testing :
 Condition testing is done to test all logical conditions in a program module.
 It differs from branch coverage only when multiple conditions must be evaluated
to reach a decision.
 Multi-condition coverage requires sufficient test cases to exercise all possible
combinations of conditions in a program decision.
 Test cases are designed so that each condition takes on every possible value
at least once.
Eg.     if ((x) && (y) && (!z))
            printf("valid\n");
        else
            printf("invalid\n");
Hence, two test cases that make each condition take both of its values are:
i) x = T, y = T, z = F
ii) x = F, y = F, z = T
In multi-condition coverage, all combinations of the conditions are tested.
Loop Coverage Testing :

This requires sufficient test cases for all program loops to be executed for 0, 1, 2 and
many iterations, covering initialization, typical running and termination conditions.
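A C sketch of loop coverage for a hypothetical function sum() over an array (an invented example),
driving the loop for 0, 1, 2 and many iterations:

    #include <assert.h>

    /* Hypothetical loop under test: sums the first n elements of a[]. */
    static int sum(const int a[], int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(void) {
        int a[] = {1, 2, 3, 4, 5};

        assert(sum(a, 0) == 0);    /* loop executed 0 times              */
        assert(sum(a, 1) == 1);    /* exactly 1 iteration                */
        assert(sum(a, 2) == 3);    /* exactly 2 iterations               */
        assert(sum(a, 5) == 15);   /* "many" iterations, up to the limit */
        return 0;
    }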

Path Coverage Testing :

 It is the most powerful form of white box testing; all paths are tested.
 This criterion requires sufficient test cases for each feasible path (basis path, etc.), from
the start to the exit of a defined program segment, to be executed at least once.

Domain & Boundary Testing :

 It is a form of path coverage


 Path domains are a subset of the input that causes execution of unique paths.
 Input data can be derived from the program control graph. Test inputs are chosen to
exercise each path & also the boundaries of each domain.

Data Flow Testing:


 Data flow testing focuses on the points at which variables receive values and the points
at which those values are used.
 This technique requires sufficient test cases for each feasible data flow to be executed
at least once.
 Data flow analysis studies the sequence of actions & variables along program paths.
 Terminology used during data flow testing is –
 Def – A statement in the program where an initial value is assigned to a variable, e.g.
i = 1, sum = 0
Basic Block – A set of consecutive statements that can be executed without branching,
e.g.    sum = sum + next;
        bill_value = bill_value + sum;
        next++;

c_use – Also called computation use; it occurs when a variable is used in a computation. A
path can therefore be identified starting from the definition of the variable and ending at a statement
where it is used in a computation, called a dc path.
p_use – Similar to c_use, except that in the statement the variable appears in a condition (predicate).
A path can be identified starting from the definition of the variable and ending at the statement where the
variable appears in the predicate, called a dp path.
all_uses – Paths are identified starting from the definition of a variable to its every
possible use.
du path – A path identified starting from the definition of a variable and ending at a
point where it is used, along which its value is not changed (not redefined).
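A small C fragment (hypothetical, loosely based on the sum/next example above) annotated with the
data-flow events for the variables sum and next:

    #include <stdio.h>

    int main(void) {
        int next, sum;

        sum = 0;                        /* def of sum                               */
        while (scanf("%d", &next) == 1  /* def of next                              */
               && next != 0) {          /* p_use of next (predicate)                */
            sum = sum + next;           /* c_use of sum and next; re-def of sum     */
        }
        printf("%d\n", sum);            /* c_use of sum; a du path for sum ends here */
        return 0;
    }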
Logic Based Testing –

Logic-based testing is used when the input domain and the resulting processing are amenable to a
decision table representation.

Fault Based Testing-


 It attempts to show the absence of certain classes of faults in the code.
 The code is analyzed for uninitialized or unreferenced variables, parameter type
mismatches, etc.
INTERFACE TESTING :
 It is intended to discover defects in the interfaces of objects or modules.
 Interface defects may arise because of errors made in reading the specification,
specification misunderstandings, or invalid timing assumptions.
 It is useful for object oriented software development.
 Types of interface errors may include: parameter interface errors, shared memory
errors, message passing interface errors, procedural interface errors, etc.

TEST SUITE

Test Plan
A test plan is a detailed document which describes software testing areas and activities. It outlines
the test strategy, objectives, test schedule, required resources (human resources, software, and
hardware), test estimation and test deliverables.

Test Case
A test case is a set of instructions determining whether a software or system behaves as expected.
A test case generally outlines the various inputs and outputs for a particular scenario and provides
step-by-step instructions on executing that scenario. It can also include information about the
expected result after executing those steps.

Test Step – Expected Outcome (the Obtained Outcome column is filled in during test execution):

1) Access the URL https://demoblaze.com – The "Demo Blaze" site should be displayed
   on the browser.
2) On the Categories menu on the left, click on Laptops – The product listing should be
   updated, displaying only laptops.
3) Click on the first product (Sony VAIO i5) – The product's details page should be
   displayed.
4) Click on the Add to cart button – The window should display an alert saying
   "Product added."
5) Click on the Cart option on the main navigation menu – The cart page should be
   displayed, showing the laptop as the only product in the cart. It should display the
   picture of the laptop, its description (Sony VAIO i5), its price ($790), and a link to
   delete the item. The cart total should be displayed as $790.

Example of a Test Case

Test Suite
A test suite is a set of tests designed to check the functionality and performance of the software. It
collects individual test cases based on their specific purpose or characteristics.

Importance of Test Suite


As a test suite is a collection of test cases grouped according to a specific set of criteria, it is
important to understand why test suites matter. By organizing test cases into test suites, testers can
identify and prioritize the most critical tests, ensuring that the most important aspects of the
software are tested first. This helps reduce the risk of missed errors or defects during testing.

TEST SUITE: SHOPPING CART

Test Case – Goal

1) Adding a single product to an empty cart – Verify whether the cart correctly handles
   receiving the first item.
2) Adding two products to an empty cart – Verify whether the cart correctly handles
   receiving two items.
3) Removing an existing product from the cart – Check whether the functionality of
   removing items works as expected.
4) Adding two instances of the same product to the cart – Verify whether adding two
   instances of the same item works as expected.

Example of a Test Suite
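A minimal C sketch of how the shopping-cart suite above could be organized in code; the test
functions are placeholders (invented names with trivially passing bodies), and the suite is simply
an array of named test cases that are run together:

    #include <stdio.h>

    /* Placeholder test functions; each returns 1 on pass, 0 on fail. */
    static int test_add_single_product(void)    { return 1; }
    static int test_add_two_products(void)      { return 1; }
    static int test_remove_product(void)        { return 1; }
    static int test_add_duplicate_product(void) { return 1; }

    /* A test suite: a collection of related test cases run as a group. */
    struct test_case {
        const char *name;
        int (*run)(void);
    };

    int main(void) {
        struct test_case shopping_cart_suite[] = {
            { "add single product to empty cart",  test_add_single_product },
            { "add two products to empty cart",    test_add_two_products },
            { "remove existing product from cart", test_remove_product },
            { "add two instances of same product", test_add_duplicate_product },
        };
        int failures = 0;
        for (size_t i = 0; i < sizeof shopping_cart_suite / sizeof shopping_cart_suite[0]; i++) {
            int ok = shopping_cart_suite[i].run();
            printf("%-40s %s\n", shopping_cart_suite[i].name, ok ? "PASS" : "FAIL");
            failures += !ok;
        }
        return failures ? 1 : 0;
    }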


STATIC TESTING METHODS

Static Testing is a testing technique performed on a software application where the test
elements are not actually executed. As the name says, it is the opposite of dynamic
testing, so the application is not tested while it is running. It involves manual
processes based on activities like visually evaluating the code for functionality coverage,
analyzing the code for errors, unused snippets or variables, compliance with coding standards,
etc.
Reviews can vary from informal to formal review.
Informal Reviews
o Informal reviews are applied in the early stages of the life cycle of the document.
o These reviews are conducted by a two-person team; in later stages more people are
involved.
o The aim of informal reviews is to improve the quality of the document and help the
authors.
o These reviews are not based on a formal procedure and are not documented.

Formal reviews
o Formal reviews follow the formal process i.e. these reviews are well structured and
managed.
o Following are the phases of formal reviews:
i) Planning
The review process starts with the planning phase, in which the author requests a review
from the inspection leader. In a formal review, the inspection leader executes the entry
check and defines the exit criteria. The entry criteria verify that the document is ready to
enter the formal review process.
ii) Kick-off
Kick-off meeting is optional in review procedure. The aim of kick-off step is to explain
the objectives of review and distribute the documents in meeting etc.
iii) Preparation
In preparation, reviewers review the document separately using related rules,
procedures, and documents. Every reviewer recognizes the defects, questions and
comments as per their role and understanding of document.
iv) Review meeting
Review meeting includes three phases:
1. Logging Phase - Defects and issues identified in the preparation step are
logged page by page.
2. Discussion Phase - This phase handles the issues that require discussion.
3. Decision Phase - Decision on the document reviews is constructed by reviewers or
participants. Sometimes decision is based on formal exit criteria (Average number of
major defects found per page).
v) Rework
If the number of defects found per page is more than certain level then the document
needs to be reworked.
vi) Follow-up
In the follow-up, the moderator ensures that the author has taken action on all known defects.
The distribution of the updated document and the collection of feedback are completed in the
follow-up. It is the responsibility of the moderator to ensure that the information is
correct and is stored for future analysis.
During the review process, the following participants take part:
 Moderator: Performs the entry check, follows up on rework, coaches team members and
schedules the meeting.
 Author: Takes responsibility for fixing the defects found and improves the quality of the
document.
 Scribe: Logs the defects during a review and attends the review meeting.
 Reviewer: Checks the material for defects and inspects it.
 Reader: Reads through the documents, one item at a time; the other
inspectors then point out defects.

Main types of review are as follows:


1) Walkthrough
 In a walkthrough, the author guides the review team through the document to establish a
common understanding and collect feedback.
 A walkthrough is not a formal process.
 In a walkthrough, the review team is not required to do a detailed study before the meeting,
while the author is expected to be well prepared.
 Walkthroughs are useful for higher-level documents, i.e. requirement specifications and
architectural documents.

Goals of walkthrough
 Make the document available to stakeholders both inside and outside the software
discipline in order to collect information about the topic under documentation.
 Describe and evaluate the content of the document.
 Study and discuss the validity of possible alternatives and proposed solutions.

2) Inspection
 The trained moderator guides the inspection. It is the most formal type of review.
 The reviewers are prepared and check the documents before the meeting.
 In an inspection, a separate preparation is carried out in which the product is examined and
defects are found. These defects are documented in an issue log.
 In an inspection, the moderator performs a formal follow-up by applying the exit criteria.

Goals of Inspection
 Assist the author to improve the quality of the document under inspection.
 Efficiently and rapidly remove the defects.
 Generate documents with a higher level of quality, which helps to improve the product
quality.
 Learn from the defects found previously and prevent the occurrence of similar defects.
 Generate a common understanding by interchanging information.

3) Technical Review
 It is a well-documented review that follows a defect detection technique involving peers and
technical experts.
 It is usually led by a trained Moderator and not the Author.
 In a technical review, the product is examined and the defects found are mainly
technical ones.
 There is no management participation in a technical review.
 A full report is prepared containing the list of issues addressed.

Goals of technical review


 The goal is to evaluate the value of the technical concepts in the project environment.
 Build consistency in the use and representation of the technical concepts.
 In early stages, it ensures that the technical concepts are used correctly.
 Inform the participants about the technical content of the document.
