Black Box Testing: Engr. Anees Ur Rahman


Software Verification and Validation

Black Box Testing


Lecture 03

Engr. Anees ur Rahman


Department of Software Engineering
Equivalence Class Partitioning Testing
 Equivalence Partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived

 An ideal test case single-handedly uncovers a class of errors, e.g., incorrect processing of all
character data, that might otherwise require many cases to be executed before the general
error is observed.

 Equivalence Partitioning strives to define test cases that uncover classes of errors, thereby
reducing the total number of test cases that must be developed.

 An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence Class Partitioning Testing
 Equivalence classes can be defined according to the following guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined.

 If an input condition specifies a specific value, one valid and two invalid
equivalence classes are defined.

 If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.

 If an input condition is Boolean, one valid and one invalid class are defined.
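The guidelines above can be sketched in code. This is a minimal illustration, assuming a hypothetical input field whose specification says it must be an integer in the range 18 to 60; one representative value is drawn from each equivalence class.

```python
# Sketch: deriving one test case per equivalence class for a hypothetical
# age field specified as an integer in the range 18..60 (assumed spec).

def is_valid_age(age):
    """Accepts the input only if it is an integer between 18 and 60."""
    return isinstance(age, int) and not isinstance(age, bool) and 18 <= age <= 60

# One representative value per equivalence class:
representatives = {
    "valid (in range)":  35,    # the single valid class
    "invalid (below)":   10,    # invalid class: below the range
    "invalid (above)":   99,    # invalid class: above the range
    "invalid (non-int)": "x",   # invalid class: wrong data type
}

for label, value in representatives.items():
    print(label, "->", is_valid_age(value))
```

Four test cases cover the whole input domain; any other value in a class is assumed to behave the same as its representative.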
Boundary Value Testing
 (A specific case of Equivalence Class Partitioning Testing)
 Boundary value analysis leads to a selection of test cases that exercise bounding values. This
technique was developed because a great number of errors tend to occur at the boundary of
the input domain rather than at the center.

 Tests program response to extreme input or output values in each equivalence class.

 Guidelines for BVA are as follows:
 If an input condition specifies a range bounded by values
a and b, test cases should be designed with values a and b
and with values just above and just below a and b.
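That guideline can be expressed as a small helper. A minimal sketch, assuming the inputs are integers so that "just above" and "just below" mean plus or minus one:

```python
def boundary_values(a, b):
    """Boundary-value test inputs for an integer range [a, b]: the bounds
    themselves plus the values just below and just above each bound."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For the age range 18..60 from the partitioning example:
print(boundary_values(18, 60))  # -> [17, 18, 19, 59, 60, 61]
```

The two values outside the range (17 and 61) fall in the invalid classes; the rest probe the edges of the valid class where off-by-one defects cluster.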
Fuzz Testing
 Fuzz testing or fuzzing is a software testing technique, often automated or
semi-automated, that involves providing invalid or unexpected data to the
inputs of a computer program. The program is then monitored for exceptions
or failing built-in code assertions or for finding
potential memory leaks.
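A minimal fuzzing loop looks like the sketch below. The target function and the failure policy are assumptions for illustration: random printable strings are thrown at a toy parser, its documented exception is tolerated, and anything else is counted as a defect.

```python
# Toy fuzzer: feed random printable strings to a target function and
# count exceptions outside its documented failure mode (ValueError).
import random

def parse_int(text):
    # Hypothetical target under test: accepts decimal strings only.
    return int(text)

random.seed(0)  # seeded so the fuzzing run is reproducible
crashes = 0
for _ in range(200):
    fuzz_input = "".join(chr(random.randrange(32, 127)) for _ in range(8))
    try:
        parse_int(fuzz_input)
    except ValueError:
        pass          # expected, documented failure mode
    except Exception:
        crashes += 1  # anything else is a defect worth investigating

print("unexpected exceptions:", crashes)
```

Real fuzzers (AFL, libFuzzer, Python's `atheris`) add coverage feedback and input mutation, but the monitor-for-unexpected-failures loop is the same idea.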
Null Case Testing
 Null Testing:
 Exposes defects triggered by no data or missing data.
 Often triggers defects because developers create programs to act upon data; they
don’t think of the case where the input may not contain specific data types

 Example: X, Y coordinates missing for drawing various shapes in a graphics editor.
 Example: Blank file names
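The blank-file-name example can be turned into a concrete null-case test. The helper below is hypothetical; the point is that `None`, the empty string, and whitespace-only input are all exercised alongside the normal case.

```python
def file_display_label(filename):
    """Hypothetical helper that must tolerate missing data (None)
    and blank file names instead of crashing."""
    if filename is None or filename.strip() == "":
        return "(untitled)"
    return filename

# Null-case tests: no data and blank data, plus the normal case.
assert file_display_label(None) == "(untitled)"
assert file_display_label("") == "(untitled)"
assert file_display_label("   ") == "(untitled)"
assert file_display_label("report.txt") == "report.txt"
print("null-case tests passed")
```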
Comparison Testing
 There are some situations in which the reliability of software is critical. In such
applications, redundant hardware and software are often used to minimize the possibility of
error.

 When redundant software is developed, separate software engineering teams develop
independent versions of an application using the same specification.

 In such situations each version can be tested with the same test data to ensure that all provide
identical output.

 Then all versions are executed in parallel with real-time comparison of results to ensure
consistency.

 These independent versions form the basis of a black-box testing technique called comparison
testing or back-to-back testing.
Comparison Testing
 If the output from each version is the same, it is
assumed that all implementations are correct.

 If the output is different, each of the applications is
investigated to determine if a defect in one or more
versions is responsible for the difference.

 Comparison testing is not foolproof; if the
specification from which all versions have been
developed is in error, all versions will likely reflect the
error.

 In addition, if each of the independent versions
produces identical but incorrect results, comparison
testing will fail to detect the error.
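A back-to-back comparison can be sketched in a few lines. The two implementations and the specification ("sum of squares of a list") are assumptions for illustration; in practice the versions would come from independent teams.

```python
# Back-to-back sketch: two independently written versions of the same
# assumed spec ("sum of squares of a list") run on identical test data.

def sum_squares_v1(xs):
    return sum(x * x for x in xs)

def sum_squares_v2(xs):
    total = 0
    for x in xs:
        total += x ** 2
    return total

test_data = [[], [1, 2, 3], [-4, 5], [0, 0, 7]]
mismatches = [xs for xs in test_data
              if sum_squares_v1(xs) != sum_squares_v2(xs)]
print("mismatches:", mismatches)  # empty list -> the versions agree
```

Note the limitation from the slide: if both versions shared the same misreading of the spec, this comparison would still report no mismatches.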
End to End Testing
 End-to-end testing is a methodology used to test whether the flow
of an application is performing as designed from start to finish.

 End-to-end testing involves ensuring that integrated
components of an application function as expected.

 This is basically exercising an entire “workflow”. Although System


Testing is similar, in System Testing you do not have to complete
the entire workflow but may exercise individual interfaces one at
a time. [14]

 For example, a simplified end-to-end test of an email
application might involve logging in to the application, getting into
the inbox, opening and closing the mailbox, composing,
forwarding, or replying to email, checking the sent items, and
logging out of the application.
End to End Testing
 Unlike System Testing, End-to-End Testing not only validates the software system under test
but also checks its integration with external interfaces. Hence, the name “End-to-End”.

 End-to-End Testing is usually executed after functional and system testing. It uses actual
production-like data and a test environment to simulate real-time settings. End-to-End testing
is also called Chain Testing.
Error Handling Testing
 Error-handling testing determines the ability of the application system to properly process
incorrect transactions.

 Specific objectives of error-handling testing include:


 Determine that all reasonably expected error conditions are recognizable by the application system.
 Determine that the procedures provide a high probability that the error will be properly corrected.

 Error-handling Test examples:


 Produce a representative set of transactions containing errors and enter them into the system
to determine whether the application can identify the problems.
 Enter improper master data, such as prices or employee pay rates, to determine if errors that will
occur repetitively are subjected to greater scrutiny (inspection/analysis) than those causing single error results.
Integration Testing
 Integration testing (sometimes called Integration and Testing, abbreviated "I&T") is
the phase in software testing in which individual software modules are combined and
tested as a group.
 It occurs after unit testing and before system and validation testing.
 Integration testing takes as its input modules that have been unit tested, groups them
in larger aggregates, applies tests defined in an integration test plan to those
aggregates, and delivers as its output the integrated system ready for system
testing.
Integration Testing Types

 Big Bang Integration
 Incremental Integration
 Top Down Integration
 Bottom Up Integration
 Sandwich Testing
Incremental Integration
TOP DOWN INTEGRATION
 Top Down Testing is an approach to integrated testing where the top integrated
modules are tested and the branch of the module is tested step by step until the end of
the related module.
Incremental Integration
TOP DOWN INTEGRATION
Top down integration is performed in a series of steps:

1. The main control module is used as test driver and stubs are substituted for all components directly subordinate to the
main module.

2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a
time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the actual component.

5. Regression testing may be conducted to make sure that new errors have not been introduced.
Incremental Integration

 In the depth-first approach, all modules on a
control path are integrated first. See the fig. on
the right. Here the sequence of integration would be
(M1, M2, M3), M4, M5, M6, M7, and M8.

 In breadth-first, all modules directly
subordinate at each level are integrated
together. Using breadth-first for this fig., the
sequence of integration would be (M1, M2, M8),
(M3, M6), M4, M7, and M5.
Incremental Integration
BOTTOM UP INTEGRATION
 Bottom Up Testing is an approach to integrated testing where the lowest
level components are tested first, then used to facilitate the testing of higher level
components. The process is repeated until the component at the top of the hierarchy is
tested.
 All the bottom or low-level modules, procedures or functions are integrated
and then tested. After the integration testing of lower level integrated modules, the next
level of modules will be formed and can be used for integration testing. This approach
is helpful only when all or most of the modules of the same development level are ready.
Incremental Integration
BOTTOM UP INTEGRATION
Bottom up integration is performed in a series of steps:

1. Low level components are combined into clusters.


2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
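Steps 1 through 3 can be sketched as follows. The two low-level components and the tax rate are assumptions for illustration; the driver is the throwaway control program that exercises the cluster before any higher-level module exists.

```python
# Bottom-up integration sketch: two low-level components combined into a
# cluster and exercised by a test driver (all names are hypothetical).

def parse_price(text):            # low-level component 1
    return float(text)

def apply_tax(price, rate=0.1):   # low-level component 2 (rate assumed)
    return round(price * (1 + rate), 2)

def driver():
    """Test driver: feeds inputs through the cluster and checks outputs."""
    cases = [("100", 110.0), ("19.99", 21.99)]
    for text, expected in cases:
        result = apply_tax(parse_price(text))
        assert result == expected, (text, result)
    return "cluster ok"

print(driver())
```

In step 4 this driver would be discarded and the tested cluster combined with the next level up in the program structure.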
Load Testing
 Load testing is the process of putting demand on a system or device and measuring its
response. Load testing is performed to determine a system’s behavior under both
normal and anticipated peak load conditions.
 It helps to identify the maximum operating capacity of an application as well as
any bottlenecks and determine which element is causing degradation.

 Example: Using automation software to simulate 500 users logging into a web site and performing end-
user activities at the same time.
 Example: Typing at 120 words per minute for 3 hours into a word processor.
Stress Testing
 Stress testing is a form of testing that is used to
determine the stability of a given system or entity.

 It involves testing beyond normal operational
capacity, often to a breaking point, in order to
observe the results.

 In stress testing you continually put excessive load on
the system until the system crashes.

 The system is repaired and the stress test is
repeated until a level of stress is reached that is
higher than expected to be present at a customer
site.
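The load-until-it-breaks loop can be sketched with a toy system whose capacity is an assumption. The point is the shape of the test: keep increasing load, record the breaking point, and compare it with the level expected at a customer site.

```python
# Stress-test sketch: load is increased step by step until the system
# under test breaks. The "system" is a toy queue with an assumed capacity.

class BoundedQueue:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.items = []

    def put(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue full")
        self.items.append(item)

queue = BoundedQueue()
load = 0
try:
    while True:          # keep increasing the load until the breaking point
        queue.put(load)
        load += 1
except OverflowError:
    print("system broke at load:", load)
```

If the observed breaking point is below what a customer site will generate, the system is repaired and the same loop is run again, exactly as the slide describes.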
Regression Testing
 Exposes defects in code that should not have changed.

 Re-executes some or all existing test cases to exercise code that was tested in a previous release
or previous test cycle.

 Performed when previously tested code has been re-linked, such as when:
 Ported to a new operating system
 A fix has been made to a specific part of the code.

 Studies show that:
 The probability of changing the program correctly on the first try is only 50% if the change
involves 10 or fewer lines of code.
 The probability of changing the program correctly on the first try is only 20% if the change
involves around 50 lines of code.
Validation Testing
 Acceptance Testing

 Alpha Testing

 Beta Testing
Acceptance Testing
 It is virtually impossible for a software developer to foresee how the customer will really use
a program.

 When custom software is built for one customer, a series of acceptance tests are conducted to
enable the customer to validate all requirements

 Conducted by the end user rather than software engineers

 An acceptance test can range from an informal test drive to a planned and systematically
executed series of tests

 Software developers often distinguish acceptance testing by the system provider from
acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In
the case of software, acceptance testing performed by the customer is known as user
acceptance testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance)
testing
Alpha Testing
 In this type of testing, the users are invited at the development center where they use the
application and the developers note every particular input or action carried out by the user.
Any type of abnormal behavior of the system is noted.

 Alpha tests are conducted in a controlled environment


Beta Testing
 The beta test is conducted at end user sites. Unlike
alpha testing, the developer is generally not present.

 Therefore the beta test is a live application of the


software in an environment that cannot be controlled
by the developer

 In this type of testing, the software is handed over to


the user in order to find out if the software meets the
user expectations and works as it is expected to.

 The end user records all problems that are encountered


during beta testing and reports these to the developer at
regular intervals

 As a result of problems reported during beta tests,
software engineers make modifications and then
prepare for release of the software product.
