Software Testing Lect 2-3


Overview

Basics of Testing
Testing & Debugging Activities
Testing Strategies
Black-Box Testing
White-Box Testing

Testing in the Development Process

Unit Test
Integration Test
System Test
Acceptance Test
Regression Test

Practical Considerations
1

Static and dynamic V&V

[Diagram: static verification applies to the requirements specification, the architecture, the detailed design, and each implementation (V1, V2); dynamic V&V applies to the prototype and to the implementations.]

Special cases
  Executable specifications
  Animation of formal specs

2

Testing in the V-Model

[Diagram: the V-model pairs each development phase with a test level. Requirements pair with the acceptance test (customer side); architectural design with the system test; detailed design with the integration test; module implementation with the unit test (developer side). Unit tests are structural (WB); the higher-level tests are functional (BB).]

3

Testing stages
Unit testing
Testing of individual components

Integration testing
Testing to expose problems arising from the
combination of components

System testing
Testing the complete system prior to delivery

Acceptance testing
Testing by users to check that the system satisfies
requirements. Sometimes called alpha testing
4

Types of testing
Statistical testing
Tests designed to reflect the frequency of user
inputs. Used for reliability estimation.
Covered in section on Software reliability.

Defect testing
Tests designed to discover system defects.
A successful defect test is one which reveals
the presence of defects in a system.
5
Ian Sommerville 1995

Some Terminology
Failure
A failure is said to occur whenever the external
behavior does not conform to system spec.

Error
An error is a state of the system which, in the
absence of any corrective action, could lead to a
failure.

Fault
An adjudged cause of an error.
6

Some Terminology

[Diagram: a fault (also called a bug or defect) is what is there in the program; it may put the program into an erroneous state (error); when the error is observed externally, it is a failure.]

7

Testing Activities

[Diagram: each piece of subsystem code is unit-tested, yielding a tested subsystem. The tested subsystems go through the integration test, driven by the system design document, producing integrated subsystems. These undergo the functional test, driven by the requirements analysis document and the user manual, yielding a functioning system. All of these tests are performed by the developer.]

8

Testing Activities continued

[Diagram: the functioning system undergoes a performance test against the global requirements, yielding a validated system (test by developer). The validated system undergoes an acceptance test against the client's understanding of the requirements, yielding an accepted system (test by client). The accepted system undergoes an installation test in the user environment, yielding a usable system (test by developer), which becomes the system in use, tested (?) by users against their own understanding.]

9

Testing and debugging


Defect testing and debugging are distinct
processes
Defect testing is concerned with confirming the
presence of errors
Debugging is concerned with locating and
repairing these errors
Debugging involves formulating hypotheses
about program behaviour and then testing
these hypotheses to locate the system error
10

Debugging Activities

[Diagram: locate error & fault, then design the fault repair, repair the fault, and re-test the program.]

11

Testing Activities

Identify
  Test conditions (what): an item or event to be verified

Design
  How the "what" can be tested: realization

Build
  Build test cases (implement scripts, data)

Execute
  Run the system

Compare
  Test case outcome with expected outcome
  Test result

12

Goodness of test cases

Execution of a test case against a program P
  covers certain requirements of P;
  covers certain parts of P's functionality;
  covers certain parts of P's internal logic.

The idea of coverage guides test case selection.

13

Black-box Testing
Focus: I/O behavior. If for any given input, we can predict
the output, then the module passes the test.
Almost always impossible to generate all possible inputs ("test
cases")

Goal: Reduce the number of test cases by equivalence partitioning:
Divide input conditions into equivalence classes
Choose test cases for each equivalence class. (Example: If an object
is supposed to accept a negative number, testing one negative
number is enough)
14

White-box Testing
Statement Testing (Algebraic Testing): Test single statements
(Choice of operators in polynomials, etc)
Loop Testing:
Cause execution of the loop to be skipped completely. (Exception:
Repeat loops)
Loop to be executed exactly once
Loop to be executed more than once

Path testing:
Make sure all paths in the program are executed

Branch Testing (Conditional Testing): Make sure that each
possible outcome of a condition is tested at least once
  if (i == TRUE) printf("YES\n"); else printf("NO\n");
  Test cases: 1) i = TRUE; 2) i = FALSE
15

Code Coverage
Statement coverage
Elementary statements: assignment, I/O, call
Select a test set T such that by executing P in all
cases in T, each statement of P is executed at
least once.
read(x); read(y);
if x > 0 then write(1);
         else write(2);
if y > 0 then write(3);
         else write(4);

T: {<x = -13, y = 51>, <x = 2, y = -3>}
16

White-box Testing: Determining the Paths

FindMean (FILE ScoreFile)
{   float SumOfScores = 0.0;
    int NumberOfScores = 0;                       /* node 1 */
    float Mean = 0.0; float Score;
    Read(ScoreFile, Score);
    while (! EOF(ScoreFile)) {                    /* node 2 */
        if (Score > 0.0) {                        /* node 3 */
            SumOfScores = SumOfScores + Score;
            NumberOfScores++;
        }
        Read(ScoreFile, Score);                   /* node 5 */
    }
    /* Compute the mean and print the result */
    if (NumberOfScores > 0) {                     /* node 7 */
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);
    } else
        printf("No scores found in file\n");
}                                                 /* node 9 */

17

Constructing the Logic Flow Diagram

[Diagram: the control-flow graph of FindMean, built from the numbered statements on the previous slide, with edges for the while-loop and both if-else branches.]

18

Unit Testing

Objective: Find differences between specified units and their implementations.
Unit: component (module, function, class, object, ...)

Unit test environment:
[Diagram: a driver feeds test cases to the unit under test and collects the test results; stubs (dummy modules) stand in for the modules the unit calls. Effectiveness is assessed via partitioning and code coverage.]

19

Integration Testing
Objectives:
To expose problems arising from the combination
To quickly obtain a working solution from components.

Problem areas
Internal: between components
Invocation: call / message passing
Parameters: type, number, order, value
Invocation return: identity (who?), type, sequence

External:
Interrupts (wrong handler?)
I/O timing

Interaction

20

Integration Testing
Types of integration
Structural
  Big bang: no error localization
  Bottom-up: start with terminal modules, test each with a driver, then replace each driver with the real calling module
  Top-down: start at the top with stubs below, replace each stub with the real module; allows an early demo

Behavioral
(next slide)
21

Integration Testing
(Behavioral: Path-Based)

MM-path: interleaved sequence of module execution paths and messages
  Module execution path: an entry-exit path in the same module
Atomic System Function (ASF): port input, {MM-paths}, port output
Test cases: exercise ASFs

22

System Testing
Concerned with the application's externals
Much more than functional testing:

Load/stress testing
Usability testing
Performance testing
Resource testing

23

System Testing
Functional testing
Objective: Assess whether the app does what it
is supposed to do
Basis: Behavioral/functional specification
Test case: A sequence of ASFs (thread)

24

System Testing
Functional testing: coverage
Event-based coverage

PI1: each port input event occurs
PI2: common sequences of port input events occur
PI3: each port input event occurs in every relevant data context
PI4: for a given context, all possible input events occur
PO1: each port output event occurs
PO2: each port output event occurs for each cause

Data-based
DM1: Exercise cardinality of every relationship
DM2: Exercise (functional) dependencies among relationships
25

System Testing

Stress testing: push the system to its limit and beyond
  Volume
  Users
  Resources: physical + logical

[Diagram: response rate of the application (system) as load increases.]

26

System Testing

Performance testing
  Performance as seen by
    users: delay, throughput
    system owner: memory, CPU, communication
  Performance is either
    explicitly specified, or expected to do well
    unspecified: find the limit

Usability testing
  The human element in system operation
  GUI, messages, reports, ...

27

Test Stopping Criteria

Met the deadline, exhausted the budget, management decision
Achieved desired coverage
Achieved desired level of failure intensity

28

Acceptance Testing

Purpose: ensure that end users are satisfied


Basis: user expectations (documented or not)
Environment: real
Performed: for and by end users (commissioned
projects)
Test cases:
May reuse from system test
Designed by end users
29

Regression Testing
Whenever a system is modified (fixing a bug,
adding functionality, etc.), the entire test suite
needs to be rerun
Make sure that features that already worked are not
affected by the change

Automatic re-testing before checking in changes


into a code repository
Incremental testing strategies for big systems
30

Comparison of White- & Black-box Testing (25.1.2002)
White-box Testing:
Potentially infinite number of paths
have to be tested
White-box testing often tests what
is done, instead of what should be
done
Cannot detect missing use cases

Black-box Testing:
Potential combinatorial explosion
of test cases (valid & invalid data)
Often not clear whether the selected
test cases uncover a particular error
Does not discover extraneous use
cases ("features")

Both types of testing are needed


White-box testing and black box
testing are the extreme ends of a
testing continuum.
Any choice of test case lies in
between and depends on the
following:

Number of possible logical paths


Nature of input data
Amount of computation
Complexity of algorithms and data
structures

31

The 4 Testing Steps


1. Select what has to be
measured
Analysis: Completeness of
requirements
Design: tested for cohesion
Implementation: Code tests

2. Decide how the testing is done

Code inspection
Proofs (Design by Contract)
Black-box, white box,
Select integration testing
strategy (big bang, bottom up,
top down, sandwich)

3. Develop test cases


A test case is a set of test data
or situations used to exercise
the unit (code, module, system)
being tested or to measure the
attribute of interest

4. Create the test oracle


An oracle contains the
predicted results for a set of
test cases
The test oracle has to be
written down before the actual
testing takes place
32

Guidance for Test Case Selection

Use analysis knowledge about functional requirements (black-box testing):
  Use cases
  Expected input data
  Invalid input data

Use design knowledge about system structure, algorithms, data structures (white-box testing):
  Control structures: test branches, loops, ...
  Data structures: test record fields, arrays, ...

Use implementation knowledge about algorithms, for example:
  Force division by zero
  Use a sequence of test cases for an interrupt handler

33

Unit-testing Heuristics
1. Create unit tests as soon as object
design is completed:
Black-box test: Test the use
cases & functional model
White-box test: Test the
dynamic model
Data-structure test: Test the
object model
2. Develop the test cases
Goal: Find the minimal
number of test cases to cover
as many paths as possible
3. Cross-check the test cases to
eliminate duplicates
Don't waste your time!

4. Desk check your source code


Reduces testing time
5. Create a test harness
Test drivers and test stubs are
needed for integration testing
6. Describe the test oracle
Often the result of the first
successfully executed test
7. Execute the test cases
Don't forget regression testing
Re-execute test cases every time a
change is made.
8. Compare the results of the test with the
test oracle
Automate as much as possible

34
