
SOFTWARE TESTING TECHNIQUES
CSR1105 - UNIT-I
Course Detail

Course Name : Software Testing
Course Code : CSR1105
Credits : 1.5
L T P : 2 0 2

Course Objectives:
The main objectives of this course are:
1. To study and understand fundamental concepts in software testing.
2. To understand different testing techniques used in designing test cases.
3. To understand how developers incrementally develop and test code.
Course Content

1. Overview (8 lectures): Introduction to test cases, test case design; levels of testing: module, integration, system, regression; structural versus functional technique categories; static versus dynamic testing; control flow & data flow testing; regular expressions in testing; determining metrics; black box testing; white box testing; test prioritization; performance, load, stress & security testing; debugging.
2. Object Oriented Testing (6 lectures): Object oriented testing issues; OO testing methodologies; analysis and design testing (UML based); class testing; integration testing; testing hierarchies.
3. Testing Automation (6 lectures): Software test automation; scope of automation; design and architecture for automation; requirements for a test tool; challenges in automation; regression testing; test case prioritization and minimization; testing tools.
4. Modern Testing Trends (8 lectures): Modern testing principles; Agile, DevOps & CI/CD; AI and ML in software testing; test driven development; user experience and usability testing; Robotic Process Automation (RPA) testing; shift left and shift right testing; IEEE standard for testing.
Lab Work

1. Project to understand automation test case generation. (3 hours)
2. Using any Integrated Development Environment (IDE) or tool (like Selenium), write a test suite containing a minimum of 4 to 5 test cases; a sketch of such a suite follows this list. (3 hours)
3. Project for the performance analysis and regression testing of any program. (4 hours)
4. A project to illustrate the use of object-oriented testing. (4 hours)
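For lab exercise 2, a minimal sketch of what such a suite could look like in Python with Selenium and unittest is shown below. The target URL, the locators, and the use of Chrome are placeholder assumptions, and the suite would be extended to the required 4 to 5 cases.

```python
# Minimal sketch of a Selenium-based test suite (lab exercise 2), assuming
# Python, the `selenium` package (4.6+) and Chrome are installed; the URL and
# locators are placeholders for the real application under test.
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By

class ExampleSiteTests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()          # Selenium Manager resolves the driver
        self.driver.get("https://example.com")    # placeholder site under test

    def tearDown(self):
        self.driver.quit()

    def test_page_title(self):
        self.assertIn("Example", self.driver.title)

    def test_main_heading_visible(self):
        heading = self.driver.find_element(By.TAG_NAME, "h1")
        self.assertTrue(heading.is_displayed())

if __name__ == "__main__":
    unittest.main()
```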
Error, Mistake, Bug, Fault and Failure

■ An error (or fault) is a design flaw or a deviation from a desired or intended state.
– An error won't yield a failure without the conditions that trigger it. For example, if the program yields 2+2=5 only on the 10th time you use it, you won't see the error before or after the 10th use.
■ The failure is the program’s actual incorrect or missing
behavior under the error-triggering conditions.
■ A symptom might be a characteristic of a failure that
helps you recognize that the program has failed.
Some Software failures

■ Ariane 5
– It took the European Space Agency
10 years and $7 billion to produce
Ariane 5, a giant rocket capable of
hurling a pair of three-ton satellites
into orbit with each launch and
intended to give Europe
overwhelming supremacy in the
commercial space business.
– The rocket was destroyed after 39
seconds of its launch, at an altitude
of two and a half miles along with its
payload of four expensive and
uninsured scientific satellites.
Some Software Failures
Y2K problem:
■ It was simply ignorance about the adequacy (or otherwise) of using only the last two digits of the year.
■ The 4-digit date format, like 1964, was shortened to a 2-digit format, like 64.
Experience of Windows XP
■ Released on October 25, 2001.
■ The same day, the company posted 18 megabytes of patches on its website for bug fixes, compatibility updates and enhancements.
Testing definition
■ “Testing is the process of executing a program with the
intent of finding faults.”
– Why should we test ?
– Who should do testing ?
– What should we test ?
– What do we mean by "complete testing"?
– And many more questions…
What is this?
■ A failure
■ An error
■ A fault

How do we deal with the errors?
■ Verification?
■ Patching?
■ Testing?
Seven Testing Principles
1. Testing shows the presence of defects, not their
absence
2. Exhaustive testing is impossible
3. Early testing saves time and money
4. Defects cluster together
5. Beware of the pesticide paradox
6. Testing is context dependent
7. Absence-of-errors is a fallacy
How do you test this?
The system shall operate at an input voltage range of nominal 100-250 VAC.

■ Try it with an input voltage in the range of 100 – 250.

■ Poor Answer
Motivating the Bug Fix
■ Vary the options and settings of the program (change conditions by changing something about the program under test).
■ Vary the software and hardware environment (less memory, a different printer, more device interrupts coming in, etc.).
■ But why are there errors?
– Late design, third-party bugs, inherent complexity, miscommunication, ...
Cost of Finding & Fixing Errors
■ How much it costs to find the bug
■ How much it costs to fix the bug
■ How much it costs to distribute the bug fix.
CHARACTERISTICS OF A GOOD TEST
■ It has a reasonable probability of catching an error.
■ It is not redundant.
■ It's the best of its breed.
■ It is neither too simple nor too complex.
Verification & Validation
■ Verification: "Are we building the product right"
– The software should conform to its specification
■ Validation: "Are we building the right product"
– The software should do what the user really
requires
■ V & V must be applied at each stage in the software
process
■ Two principal objectives
– Discovery of defects in a system
– Assessment of whether the system is usable in an
operational situation
Testing Levels

[Figure: the V-model relating development phases to test plans and test levels]
Requirements specification -> Acceptance test plan -> Acceptance test
System specification -> System integration test plan -> System integration test
System design -> Sub-system integration test plan -> Sub-system integration test
Detailed design -> Module and unit code and test
Service follows acceptance testing.
Test Type & Level: Functional
■ For component testing, tests are designed based on how a component should calculate compound interest (a short sketch of such a test follows this list).
■ For component integration testing, tests are designed
based on how account information captured at the user
interface is passed to the business logic.
■ For system testing, tests are designed based on how
account holders can apply for a line of credit on their
checking accounts.
■ For system integration testing, tests are designed
based on how the system uses an external
microservice to check an account holder’s credit score.
■ For acceptance testing, tests are designed based on
how the banker handles approving or declining a credit
application.
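To make the component-level bullet above concrete, here is a minimal sketch of a specification-based component test. The compound_interest function, its formula and the tolerances are illustrative assumptions, not part of the course material.

```python
# Hypothetical component under test: specified to compute compound interest as
# amount = principal * (1 + rate / n) ** (n * years).
def compound_interest(principal, annual_rate, years, n=1):
    return principal * (1 + annual_rate / n) ** (n * years)

# Tests designed from how the component *should* calculate, not from its code
assert abs(compound_interest(1000, 0.05, 2) - 1102.50) < 0.01        # 5% for 2 years
assert compound_interest(1000, 0.0, 5) == 1000                       # zero rate: unchanged
assert abs(compound_interest(1000, 0.12, 1, n=12) - 1126.83) < 0.01  # monthly compounding
```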
Test Type & Level: Non-Functional
■ For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation.
■ For component integration testing, security tests are
designed for buffer overflow vulnerabilities due to data
passed from the user interface to the business logic.
■ For system testing, portability tests are designed to
check whether the presentation layer works on all
supported browsers and mobile devices.
■ For system integration testing, reliability tests are
designed to evaluate system robustness if the credit
score microservice fails to respond.
■ For acceptance testing, usability tests are designed to
evaluate the accessibility of the banker’s credit
processing interface for people with disabilities.
Test Type: White Box
■ For component testing, tests are designed to achieve
complete statement and decision coverage for all
components that perform financial calculations.
■ For component integration testing, tests are designed
to exercise how each screen in the browser interface
passes data to the next screen and to the business
logic.
■ For system testing, tests are designed to cover
sequences of web pages that can occur during a credit
line application.
■ For system integration testing, tests are designed to
exercise all possible inquiry types sent to the credit
score microservice.
■ For acceptance testing, tests are designed to cover all
supported financial data file structures and value
ranges for bank-to-bank transfers.
Functional Testing
Testing is based on the functionality of the program.
Internal structure of the code is ignored.

Black Box Testing
[Figure: input test data -> System -> output test data]
EQUIVALENCE CLASSES AND BOUNDARY VALUES
■ Black box testing

Test Cases: BVA
■ What extensions or variations are made for boundary value analysis?
■ Robustness testing
■ Worst case testing
■ Robust worst case testing
Functional Testing
The general functional testing process is:
■ The requirements or specifications are analyzed.
■ Valid inputs are chosen based on the
specification to determine that the SUT
processes them correctly. Invalid inputs must
also be chosen to verify that the SUT detects
them and handles them properly.
■ Expected outputs for those inputs are
determined.
■ Tests are constructed with the selected inputs.
■ The tests are run. Actual outputs are compared
with the expected outputs.

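A minimal sketch of this process for a hypothetical pass/fail marking function follows; the specification, names and thresholds are assumptions for illustration only.

```python
# Hypothetical example (not from the slides): the SUT is specified to accept an
# exam mark in 0-100, return "pass" for marks >= 40 and "fail" otherwise, and
# reject anything outside 0-100.
def grade_mark(mark):
    if not 0 <= mark <= 100:
        raise ValueError("Invalid input")
    return "pass" if mark >= 40 else "fail"

# Steps 1-3: valid inputs chosen from the specification, with expected outputs
valid_cases = [(75, "pass"), (39, "fail"), (0, "fail"), (100, "pass")]

# Steps 4-5: run the tests and compare actual outputs with expected outputs
for given, expected in valid_cases:
    assert grade_mark(given) == expected

for bad in (-1, 101):                 # invalid inputs must be detected and handled
    try:
        grade_mark(bad)
        raise AssertionError(f"{bad} should have been rejected")
    except ValueError:
        pass
print("all functional test cases behaved as expected")
```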
Functional Testing: Applicability
■ Black box testing can be applied at all levels of system development: unit, integration, system, and acceptance.
Boundary Value Analysis

Consider the function F with two input variables x and y.

a≤x≤b
c≤y≤d

x and y are bounded by the two intervals [a, b] and [c, d].
Boundary Value Analysis
[Figure: input domain of F, the rectangle a ≤ x ≤ b, c ≤ y ≤ d]
Any point within the shaded rectangle is a legitimate input to the function.
Boundary Value Analysis

Basic idea is to use input variable values at their:
o Minimum (min)
o Just above minimum (min+)
o A nominal value (nom)
o Just below maximum (max-)
o Maximum (max)

For two variables this gives the test cases:
<xnom, ymin>, <xnom, ymin+>, <xnom, ynom>, <xnom, ymax->, <xnom, ymax>,
<xmin, ynom>, <xmin+, ynom>, <xmax-, ynom>, <xmax, ynom>
Functional Testing
[Figure: boundary value analysis test cases for a function of two variables]
Functional Testing

Consider a two-input program to multiply numbers; x and y lie in the following intervals:

100 ≤ x ≤ 300
100 ≤ y ≤ 300

The 9 boundary value test cases are given in Table 1.
Boundary Value Analysis

Test case   x     y     Expected output
1           200   100   20000
2           200   101   20200
3           200   200   40000
4           200   299   59800
5           200   300   60000
6           100   200   20000
7           101   200   20200
8           299   200   59800
9           300   200   60000

Table 1: Test cases for the two-input program
Functional Testing

Thus, for a function of n variables, boundary value analysis yields 4n+1 test cases.
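Since the 4n+1 cases follow a fixed recipe, they can be generated mechanically. The helper below is a hypothetical sketch (not from the slides); it takes the nominal value as the midpoint of each range and reproduces the 9 inputs of Table 1 for the multiplication example.

```python
def bva_test_inputs(ranges):
    """Hypothetical helper: generate the 4n+1 boundary value analysis inputs
    for n variables. `ranges` is a list of (min, max) pairs; the nominal value
    is taken here as the midpoint of each range."""
    noms = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(noms)]                       # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):      # min, min+, max-, max
            case = list(noms)
            case[i] = v
            cases.append(tuple(case))
    return cases

# The two-variable multiplication program with 100 <= x, y <= 300 (Table 1)
for x, y in bva_test_inputs([(100, 300), (100, 300)]):
    print(x, y, x * y)      # expected output is simply x * y
```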
Robustness Testing

Extension of boundary value analysis.
We see what happens when a value slightly exceeds the maximum (max+) or is slightly less than the minimum (min-).
Robustness Testing
[Figure: robustness test cases for a function of two variables]
Robustness Testing

Basic idea is to use input variable values at their:
o Just below minimum (min-)
o Minimum (min)
o Just above minimum (min+)
o Nominal value (nom)
o Just below maximum (max-)
o Maximum (max)
o Just above maximum (max+)

For n variables this yields 6n+1 test cases (13 for the two-variable example of Table 2).
Robustness Testing

Consider the program for multiplication of two input numbers

100 ≤ x ≤ 300
100 ≤ y ≤ 300
The robust test cases are given in Table 2.
Robustness Testing
Test case x y Expected output

1 200 99 Invalid input

2 200 100 20000

3 200 101 20200

4 200 200 40000

5 200 299 59800

6 200 300 60000

7 200 301 Invalid input

8 99 200 Invalid input

9 100 200 20000

10 101 200 20200

11 299 200 59800

12 300 200 60000

13 301 200 Invalid input

Table 2: Robust test cases for the two-input program
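A minimal sketch of how the robust cases of Table 2 translate into executable tests, assuming a hypothetical multiply function that enforces the input ranges (pytest is used for parametrisation):

```python
import pytest

def multiply(x, y):
    """Hypothetical program under test: multiplies two values in [100, 300]."""
    if not (100 <= x <= 300 and 100 <= y <= 300):
        raise ValueError("Invalid input")
    return x * y

# A few valid boundary cases from Table 2
@pytest.mark.parametrize("x, y, expected", [
    (200, 100, 20000), (200, 300, 60000),
    (100, 200, 20000), (300, 200, 60000),
])
def test_valid_boundaries(x, y, expected):
    assert multiply(x, y) == expected

# Robust cases just outside the range (min-, max+) must be rejected
@pytest.mark.parametrize("x, y", [(200, 99), (200, 301), (99, 200), (301, 200)])
def test_invalid_inputs_rejected(x, y):
    with pytest.raises(ValueError):
        multiply(x, y)
```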
Worst Case Testing
We reject the single-fault assumption of reliability theory. Rejecting this means that we are interested in what happens when more than one variable has an extreme value.
It is more thorough in the sense that boundary value test cases are a proper subset of worst case test cases.

Taking the Cartesian product of the five values {min, min+, nom, max-, max} for each variable gives 5^n test cases, compared with 4n+1 for boundary value analysis.
Worst Case Testing
[Figure: worst case test cases for a function of two variables]
Worst Case Testing
Table 3: Worst case test inputs for the two-variable example

Test case    x     y        Test case    x     y
1           100   100       14          200   299
2           100   101       15          200   300
3           100   200       16          299   100
4           100   299       17          299   101
5           100   300       18          299   200
6           101   100       19          299   299
7           101   101       20          299   300
8           101   200       21          300   100
9           101   299       22          300   101
10          101   300       23          300   200
11          200   100       24          300   299
12          200   101       25          300   300
13          200   200
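The 5^n combinations are just a Cartesian product, so itertools.product enumerates them directly; the helper below is an illustrative sketch (not from the slides) that reproduces the 25 inputs of Table 3.

```python
from itertools import product

def worst_case_inputs(ranges):
    """Hypothetical helper: the 5**n worst case inputs are the Cartesian
    product of {min, min+, nom, max-, max} for every variable
    (nominal = midpoint of the range)."""
    value_sets = []
    for lo, hi in ranges:
        nom = (lo + hi) // 2
        value_sets.append((lo, lo + 1, nom, hi - 1, hi))
    return list(product(*value_sets))

cases = worst_case_inputs([(100, 300), (100, 300)])
print(len(cases))    # 25 combinations for the two-variable example (Table 3)
```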
Robust Worst Case Testing
[Figure: robust worst case test cases for a function of two variables]
Robust Worst Case Testing

• Considers multiple-fault theory
• Invalid inputs are taken into consideration
• Total test cases are 7^n (the Cartesian product of {min-, min, min+, nom, max-, max, max+} for each variable)
• Constructs the largest set of test cases and requires the maximum effort to generate these test cases
Robust Worst Case Testing

Consider the program for multiplication of two input numbers:

100 ≤ x ≤ 300
100 ≤ y ≤ 300

This gives 7^2 = 49 robust worst case test cases.
Limitations - BVA
■ Does not work well for Boolean variables
■ Does not work well for logical variables (e.g. PIN, transaction type)
■ Does not work well when variables are not independent


Functional Testing
Equivalence Class Testing
If we expect the same result from two test cases, we consider them equivalent.
A group of test cases forms an equivalence class if:
■ They all test the same thing.
■ If one test catches a bug, the others probably will too.
■ If one test does not catch a bug, the others probably would not either.
Functional Testing

Subsets are determined by an equivalence relation; the elements of a subset have something in common.
The idea is to identify test cases by using one element from each equivalence class.
Example: Equivalence Class Testing
Consider a simple program to classify a triangle. Its input is a triple of positive integers (say x, y, z), and the data type for the input parameters ensures that these will be integers greater than 0 and less than or equal to 100. The program output may be one of the following words:
[Scalene; Isosceles; Equilateral; Not a triangle]
Design the equivalence class test cases.
Example: Equivalence Class Testing - Solution
Output domain equivalence classes are:
O1={<x,y,z>: Equilateral triangle with sides x,y,z}
O2={<x,y,z>: Isosceles triangle with sides x,y,z}
O3={<x,y,z>: Scalene triangle with sides x,y,z}
O4={<x,y,z>: Not a triangle with sides x,y,z}
The test cases are:
Test case x y z Expected Output

1 50 50 50 Equilateral
2 50 50 99 Isosceles

3 100 99 50 Scalene
4 50 100 50 Not a triangle
Example: Equivalence Class Testing
Input domain based classes are:

I1={x: x < 1}
I2={x: x > 100}
I3={x: 1 ≤ x ≤ 100}
I4={y: y < 1}
I5={y: y > 100}
I6={y: 1 ≤ y ≤ 100}
I7={z: z < 1}
I8={z: z > 100}
I9={z: 1 ≤ z ≤ 100}
Example: Equivalence Class Testing
Some input domain classes can be obtained using the relationships amongst x, y and z.
I10={< x,y,z >: x = y = z}
I11={< x,y,z >: x = y, x ≠ z}
I12={< x,y,z >: x = z, x ≠ y}
I13={< x,y,z >: y = z, x ≠ y}
I14={< x,y,z >: x ≠ y, x ≠ z, y ≠ z}
I15={< x,y,z >: x = y + z}
I16={< x,y,z >: x > y +z}
I17={< x,y,z >: y = x +z}
I18={< x,y,z >: y > x + z}
I19={< x,y,z >: z = x + y}
I20={< x,y,z >: z > x + y}
Example: Equivalence Class Testing
Test cases derived from input domain are:
Test case x y z Expected Output

1 0 50 50 Invalid input
2 101 50 50 Invalid input

3 50 50 50 Equilateral
4 50 0 50 Invalid input
5 50 101 50 Invalid input
6 50 50 50 Equilateral
7 50 50 0 Invalid input
8 50 50 101 Invalid input
9 50 50 50 Equilateral
10 60 60 60 Equilateral
11 50 50 60 Isosceles

12 50 60 50 Isosceles
13 60 50 50 Isosceles
Example: Equivalence Class Testing

Test case x y z Expected Output

14 100 99 50 Scalene
15 100 50 50 Not a triangle
16 100 50 25 Not a triangle
17 50 100 50 Not a triangle
18 50 100 25 Not a triangle
19 50 50 100 Not a triangle
20 25 50 100 Not a triangle

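A minimal sketch tying the equivalence classes to executable tests: the classify_triangle implementation below is a hypothetical stand-in for the program under test, and one representative test case is taken per class.

```python
import pytest

def classify_triangle(x, y, z):
    """Hypothetical implementation of the triangle classifier under test."""
    if not all(1 <= v <= 100 for v in (x, y, z)):
        return "Invalid input"
    if x + y <= z or y + z <= x or x + z <= y:
        return "Not a triangle"
    if x == y == z:
        return "Equilateral"
    if x == y or y == z or x == z:
        return "Isosceles"
    return "Scalene"

# One representative test case per equivalence class (cf. the tables above)
@pytest.mark.parametrize("x, y, z, expected", [
    (50, 50, 50, "Equilateral"),      # O1 / I10
    (50, 50, 99, "Isosceles"),        # O2 / I11 (x = y, x != z)
    (100, 99, 50, "Scalene"),         # O3 / I14
    (50, 100, 50, "Not a triangle"),  # O4 / I17 (y = x + z)
    (0, 50, 50, "Invalid input"),     # I1 (x < 1)
    (50, 101, 50, "Invalid input"),   # I5 (y > 100)
])
def test_triangle_equivalence_classes(x, y, z, expected):
    assert classify_triangle(x, y, z) == expected
```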
EQUIVALENCE CLASSES
■ Don't forget equivalence classes for invalid inputs.
■ Organize your classifications into a table or an outline.
■ Look for ranges of numbers.
■ Analyze responses to lists and menus.
■ Look for variables that must be equal.
■ Create time-determined equivalence classes.
■ Look for variable groups that must calculate to a
certain value or range.
■ Look for equivalent output events.
■ Look for equivalent operating environments.
BVA and ECP
■ Boundary value testing derives test cases with
– Serious gaps
– Massive redundancy

■ What are the motivations for equivalence class testing?
– Avoid redundancy: have fewer test cases
– Complete testing: remove gaps

■ How do equivalence classes meet the motivations of complete testing and avoiding redundancy?

■ What variations are used for equivalence class testing?
Decision Table-Based Testing
■ Associate conditions with actions to perform
■ Limited entry and extended entry decision tables

Decision Table-Based Testing
■ How are condition entries in a decision table interpreted with respect to a program?
– Inputs, equivalence classes of inputs
■ How are action entries in a decision table interpreted with respect to a program?
– Outputs, major functional processing portions
■ "Don't care" conditions

Decision Table-Based Testing
■ Don't care entries reduce the number of explicit rules by implying the existence of non-explicitly stated rules.
Decision Table-Based Testing
■ A supermarket has a loyalty scheme that is offered to all customers. Loyalty card holders enjoy the benefits of either additional discounts on all purchases or the acquisition of loyalty points, which can be converted into vouchers for the supermarket or to equivalent points in schemes run by partners. Customers without a loyalty card receive an additional discount only if they spend more than $100 on any one visit to the store; otherwise only the special offers offered to all customers apply.
Decision Table-Based Testing

                                   Rule 1  Rule 2  Rule 3  Rule 4
Conditions
  Customer without loyalty card      T       T       F       F
  Customer with loyalty card         F       F       T       T
  Extra discount selected            -       -       T       F
  Spend > $100                       F       T       -       -
Actions
  No discount                        T       F       F       F
  Extra discount                     F       T       T       F
  Loyalty points                     F       F       F       T
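A minimal sketch of how the four rules translate into code and into one test case per rule; the loyalty_action function is a hypothetical implementation of the stated policy, not supplied with the slides.

```python
def loyalty_action(has_card, extra_discount_selected, spend):
    """Hypothetical implementation of the loyalty policy in the decision table."""
    if not has_card:
        return "Extra discount" if spend > 100 else "No discount"  # Rules 1-2
    if extra_discount_selected:
        return "Extra discount"                                    # Rule 3
    return "Loyalty points"                                        # Rule 4

# One test case per rule of the decision table
assert loyalty_action(False, False, 80)  == "No discount"     # Rule 1
assert loyalty_action(False, False, 150) == "Extra discount"  # Rule 2
assert loyalty_action(True,  True,  80)  == "Extra discount"  # Rule 3
assert loyalty_action(True,  False, 80)  == "Loyalty points"  # Rule 4
```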
Decision Table-Based Testing
■ What benefit do we get from using decision tables in place of equivalence classes?
■ The order of rule evaluation has no effect on the resulting action.
■ Once a rule is satisfied and the action selected, no other rule need be examined.
■ The order of executing actions in a satisfied rule is of no consequence.
Structural Testing
■ Built from the program source code
■ A node = a maximal block of consecutive statements i1, ..., in
■ Edges between nodes = conditional or unconditional branching
■ Based on:
– Control Flow Graph (CFG)
– CFG + data flow annotations
Control Flow Based Analysis
■ Statement coverage - every statement (i.e. every node in the program's control flow graph) is executed at least once.
■ All-paths coverage - every possible control flow path through the program is traversed at least once. Equivalent to an exhaustive test of the program.
■ Branch coverage - for all decision points (e.g. if and switch), every possible branch is taken at least once.
■ Multiple-predicate coverage - Boolean expressions may contain embedded branching (e.g. A && (B || C)). Multiple-predicate coverage requires testing each Boolean expression for all possible combinations of the elementary predicates.
■ Cyclomatic number coverage - all linearly independent paths through the control flow graph are tested.
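A small sketch to contrast statement and branch coverage on a one-decision function; the apply_discount function is a made-up example, not taken from the slides.

```python
def apply_discount(total):
    """Made-up one-decision function used to contrast the coverage criteria."""
    price = total
    if total > 100:            # decision point
        price = total * 0.9
    return price

# Statement coverage: apply_discount(120) alone executes every statement,
# but never exercises the case where the if-body is skipped.
# Branch coverage additionally requires an input that takes the false branch:
assert apply_discount(120) == 108.0   # true branch
assert apply_discount(50) == 50       # false branch -> both branches covered
```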
Control Flow Based Analysis
[Figures: example control flow graphs]
Data Flow Analysis
■ For each variable v defined in node i, select subpaths between this definition and one or several subsequent uses of v.

■ Node i
– Def(i) = set of variables defined at node i, which can be used externally to i
– C-use(i) = set of variables used in a computation at node i, not defined in i
■ Edge (i, j)
– P-use(i, j) = set of variables appearing in the predicate conditioning the transfer of control from i to j

Data Objects
■ (d) Defined, created, initialized
■ (k) Killed, undefined, released
■ (u) Used:
– (c) Used in a calculation
– (p) Used in a predicate
Data Flow Analysis
■ All definitions
– Selection of a subpath from each variable definition to some use (whether in a computation or a predicate)
■ All C-uses / some P-uses (resp. all P-uses / some C-uses)
– Use in computation (resp. in predicate) is favored
■ All uses
– Selection of a subpath for each use
■ All DU paths
– Selection of all possible subpaths without iteration between a definition and each of its uses
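A minimal sketch annotating defs, c-uses and p-uses of a variable in a small made-up function, to show the kind of subpaths the criteria above select:

```python
# Made-up function annotated with defs and uses of the variable `total`.
def scaled_total(prices, factor):
    total = 0                    # def(total)
    for p in prices:             # def(p); p-use of `prices` in the loop test
        total = total + p        # c-use(total), c-use(p); def(total)
    if total > 100:              # p-use(total): use in a predicate
        total = total * factor   # c-use(total), c-use(factor); def(total)
    return total                 # c-use(total)

# An "all uses" test set must cover a subpath from each definition of `total`
# to each of its subsequent c-uses and p-uses, e.g. an empty list (loop body
# and scaling skipped) and a list whose sum exceeds 100:
print(scaled_total([], 0.9), scaled_total([60, 70], 0.9))
```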
Ordering of Criteria
[Figure: ordering of the coverage criteria]
Web Application Testing
■ Content evaluated at both syntactic and semantic levels
■ Function tested to uncover lack of conformance to
requirements
■ Structure is assessed to ensure proper content and
function are delivered
■ Usability is tested to ensure that each category of user can
be supported as new content or functionality is added
■ Navigability is tested to ensure that all navigation syntax
and semantics are exercised
■ Performance is tested under a variety of operating
conditions, configurations, and loading to ensure a
reasonable level of user response
■ Compatibility tested by executing WebApp using a variety
of client and server configurations
■ Security is tested by assessing potential vulnerabilities and
trying to exploit each of them
Performance Testing
■ Tests user response time with web applications
■ Load testing
– Tests the database: the largest load the database can handle at one time
■ Stress testing
– Tests the server: peak volume over a short span of time
Examples
■ Web Applications (online systems)
■ Database systems
■ File exchange
■ Disk space

– Multiple users logging into the system at one time (100 users
log in at one time)
– Users logging into the system very rapidly (e.g. 1 user every
second)
– Extended concurrency times (25 users remain logged into
system running heavy transactions for an extended period)
Stress Testing - Checkpoints
■ Warnings and error messages, do they come at all and
are they helpful and understandable?
■ Are data lost or corrupted?
■ Is the system too slow?
■ Do timeouts happen?
■ Does the system crash?
■ Are really ALL data processed and stored? Even the
end of files, messages or data elements?
■ Are data overwritten without warning?
■ Does the system prioritize in a reasonable way at
overload situations? Are prioritized services still
available?
Stress Testing
■ Tests the breaking point of a system by overwhelming its resources.
■ Ensures that failures have a graceful recovery
■ Features
– Flexible scenario and dynamic data generation
– Real world load simulation
– Distributed stress testing
– Reports
Load Testing
■ Ensures application can handle load
conditions
■ Tests to make sure that speed is not sacrificed should multiple users access the application
■ End user experience testing
◦ Run full-scale web load test while a subset of users logs
into system to conduct normal work activity
◦ Have a subset of end users log into the system in the
middle of load test to gauge performance
■ Transaction response testing
◦ Order entry must complete within 8 seconds
◦ AP query must return results within 5 seconds
◦ PDF attachment must upload within 5 seconds
Load Testing

■ Determine combinations of N, T, and D that cause performance to degrade
N = number of concurrent users
T = number of on-line transactions per unit of time
D = data load processed by the server per transaction
■ Overall throughput is computed using the equation
P = N * T * D
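A trivial sketch of the P = N * T * D computation; the units chosen (transactions per second per user, kilobytes per transaction) are assumptions for illustration.

```python
def overall_throughput(n_users, tx_per_user_per_sec, kb_per_tx):
    """P = N * T * D from the slide; the units (transactions/sec per user,
    KB per transaction) are illustrative assumptions."""
    return n_users * tx_per_user_per_sec * kb_per_tx

# e.g. 100 concurrent users, 2 transactions/sec each, 50 KB processed per transaction
print(overall_throughput(100, 2, 50), "KB/sec that the server must sustain")
```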
Difference
■ One difference between a load test and a stress test is that you may inject pauses into a load test to simulate real user traffic. With a stress test, you may run as many simultaneous users as fast as possible to generate excessive traffic.
■ A load test is performed in order to ensure that a website or web application is capable of handling specific numbers of users at once. A stress test is used to specifically push a system beyond its intended capacity to identify components that begin to slow down, identify bottlenecks in the system, and bring to light possible points of failure.
