Software Testing - 2012
Software Testing - 2012
Software Testing - 2012
Yogesh Singh
978-1-107-01296-7 Hardback
Contents
List of Figures
List of Tables
Preface
Acknowledgements
1. Introduction
1.1
1.2
1.3
xi
xv
xxi
xxiii
1
1
1
2
3
3
3
3
4
4
5
14
15
16
19
19
20
21
21
22
22
23
iv Contents
2. Functional Testing
2.1
What is a Graph?
3.1.1 Degree of a Node
3.1.2 Regular Graph
23
23
24
24
24
24
26
26
27
27
28
34
35
37
38
43
44
46
48
63
63
65
81
81
82
82
83
83
96
97
97
97
99
99
99
102
105
108
110
110
112
113
Contents v
3.2
4. Structural Testing
4.1
5.
113
114
114
116
117
117
123
124
127
144
144
150
159
161
163
165
165
166
167
167
167
173
174
174
175
175
176
197
197
198
202
212
212
216
216
223
226
228
0
5.1
Verification Methods
5.1.1 Peer Reviews
230
231
vi Contents
5.1.2 Walkthroughs
5.1.3 Inspections
5.1.4 Applications
5.2 Software Requirements Specification (SRS) Document Verification
5.2.1 Nature of the SRS Document
5.2.2 Characteristics and Organization of the SRS Document
5.2.3 SRS Document Checklist
5.3 Software Design Description (SDD) Document Verification
5.3.1 Organization of the SDD Document
5.3.2 The SDD Document Checklist
5.4 Source Code Reviews
5.4.1 Issues Related to Source Code Reviews
5.4.2 Checklist of Source Code Reviews
5.5 User Documentation Verification
5.5.1 Review Process Issues
5.5.2 User Documentation Checklist
5.6 Software Project Audit
5.6.1 Relevance Scale
5.6.2 Theory and Practice Scale
5.6.3 Project Audit and Review Checklist
5.7 Case Study
Multiple Choice Questions
Exercises
Further Reading
6.2
6.3
6.4
231
231
232
233
233
233
235
238
239
239
241
241
242
243
244
244
245
246
246
246
257
279
282
283
285
285
286
287
288
290
292
293
294
295
296
296
296
316
316
316
316
316
316
317
317
Contents vii
7.
8.2
8.3
Levels of Testing
8.1.1 Unit Testing
8.1.2 Integration Testing
8.1.3 System Testing
8.1.4 Acceptance Testing
Debugging
8.2.1 Why Debugging is so Difficult?
8.2.2 Debugging Process
8.2.3 Debugging Approaches
8.2.4 Debugging Tools
Software Testing Tools
8.3.1 Static Software Testing Tools
322
322
326
331
333
334
335
335
336
337
339
339
339
339
340
340
341
342
342
343
346
347
347
352
363
364
365
368
368
369
370
373
373
374
374
375
377
378
379
379
viii
Contents
381
382
382
383
386
387
389
389
390
391
393
394
394
395
395
395
396
396
400
401
404
404
406
407
408
411
412
412
412
415
417
418
420
420
420
421
422
422
423
423
Contents ix
424
424
425
426
427
427
428
429
429
429
430
430
437
442
446
449
451
453
453
453
455
456
458
458
459
461
463
463
464
465
468
469
469
469
470
470
471
476
476
479
479
480
482
Contents
485
486
490
492
494
494
495
495
496
496
496
501
502
503
503
504
505
505
511
512
513
514
Appendix I
Appendix II
Appendix III
References
Answers to Multiple Choice Questions
Index
517
541
594
612
617
621
List of Figures
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
1.10
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
2.10
2.11
2.12
2.13
2.14
2.15
2.16
3.1
3.2
3.3
3.4
3.5
3.6
3.7
3.8
5
11
13
15
18
19
19
20
25
27
37
38
39
39
40
44
45
46
64
65
96
97
98
99
100
101
111
111
112
113
117
118
118
119
xii
List of Figures
3.9
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.17
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
4.1
4.2
4.3
4.4
4.5
4.6
4.7
4.8
4.9
6.1
6.2
6.3
6.4
6.5
6.6
124
125
126
126
127
128
130
130
132
134
134
136
139
140
143
145
146
151
152
153
154
155
156
156
157
158
158
166
198
199
199
202
207
214
214
215
288
289
293
294
301
318
320
List of Figures
6.8
6.9
7.1
7.2
7.3
7.4
7.5
7.6
7.7
8.1
8.2
8.3
8.4
9.1
9.2
9.3
9.4
9.5
9.6
9.7
9.8
9.9
9.10
9.11
9.12
9.13
9.14
9.15
9.16
9.17
10.1
10.2
10.3
10.4
10.5
11.1
11.2
11.3
11.4
11.5
11.6
11.7
11.8
11.9
xiii
322
324
337
338
343
344
345
345
346
369
370
371
372
390
391
392
393
397
399
400
400
403
403
404
405
406
407
410
413
414
431
431
432
442
446
454
454
455
456
460
464
464
468
468
xiv
List of Figures
11.10
11.11
11.12
11.13
12.1
12.2
12.3
12.4
12.5
II-1
II-2
III-1
III-2
III-3
III-4
III-5
III-6
III-7
III-8
III-9
III-10
Security threats
Working of a firewall
Post deployment testing questionnaire
Typical post deployment testing procedure of a web application
Program for determination of nature of roots of a quadratic equation
Program graph of program given in Figure 12.1
A typical program
Flow chart of various steps of genetic algorithm
Program to divide two numbers
Basic and alternative flows for maintain faculty details use case (a) Add a faculty
(b) Edit a faculty (c) Delete a faculty (d) View a faculty
Basic and alternative flows for maintain registration details use case
(a) Add student registration details (b) Edit student registration details
Validity checks for scheme form
Test case with actual data values for the scheme form
Validity checks for paper form
Test case with actual data values for paper form
Validity checks for student form
Test case with actual data values for the student form
Validity checks for faculty form
Test case with actual data values for the faculty form
Validity checks for maintain registration details
Test case with actual data values for the student registration form
472
473
484
484
497
498
501
506
507
577
587
595
597
598
600
601
603
604
605
607
611
List of Tables
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
6
7
8
9
12
16
21
25
26
38
40
41
41
42
43
44
45
47
48
48
51
52
55
55
59
59
63
64
65
65
66
67
68
70
xvi
List of Tables
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.44
2.45
2.46
2.47
3.1
3.2
3.3
3.4
3.5
3.6
4.1
4.2
4.3
4.4
4.5
4.6
4.7
4.8
4.9
4.10
4.11
4.12
4.13
4.14
4.15
4.16
4.17
71
73
74
78
81
82
82
84
84
85
86
87
88
89
90
90
92
94
95
100
100
101
116
117
127
131
135
141
168
170
171
172
173
180
180
182
183
184
185
185
186
189
192
195
196
List of Tables
4.18 Test cases using program slices of program to find the largest among
three numbers
4.19 Test cases using program slices
4.20 Test cases using program slices
4.21 Test cases using program slices
4.22 Mutated statements
4.23 Actual output of mutant M1
4.24 Actual output of mutant M2
4.25 Actual output of mutant M3
4.26 Actual output of mutant M4
4.27 Actual output of mutant M5
4.28 Additional test case
4.29 Output of added test case
4.30 Revised test suite
4.31 Test suite A
4.32 Test suite B
4.33 Mutated lines
4.34 Actual output of M1(A)
4.35 Actual output of M2(A)
4.36 Actual output of M3(A)
4.37 Actual output of M4(A)
4.38 Actual output of M5(A)
4.39 Actual output of M1(B)
4.40 Actual output of M2(B)
4.41 Actual output of M3(B)
4.42 Actual output of M4(B)
4.43 Actual output of M5(B)
4.44 Additional test case
4.45 Revised test suite B
5.1 Comparison of verification methods
5.2 Organization of the SRS [IEEE98a]
5.3 Checklist for the SRS document
5.4 Checklist for the SDD Document
5.5 Source code reviews checklist
5.6 User Documentation checklist
6.1 Jacobsons use case template
6.2 Alternative use case template
6.3 Scenario matrix for the flow of events shown in Figure 6.3
6.4 Scenario matrix for the login use case
6.5 A typical test case matrix
6.6 Test case matrix for the login use case
6.7 Test case matrix with actual data values for the login use case
6.8 Scenario matrix for the maintain school details use case
6.9 Test case matrix for the maintain school details use case
6.10 Test case matrix with actual data values for the maintain school use case
6.11 Scenario matrix for the maintain programme details use case
xvii
203
207
209
211
217
217
217
218
218
218
218
218
219
219
219
219
220
220
220
220
221
221
221
222
222
222
222
223
232
234
236
239
242
244
290
291
294
295
296
297
298
302
303
305
308
xviii
6.12
6.13
6.14
6.15
6.16
6.17
6.18
6.19
6.20
6.21
6.22
7.1
7.2
7.3
7.4
7.5
7.6
7.7
7.8
7.9
7.10
7.11
7.12
7.13
7.14
7.15
7.16
7.17
7.18
7.19
7.20
9.1
9.2
9.3
9.4
9.5
9.6
9.7
9.8
9.9
9.10
9.11
9.12
9.13
10.1
List of Tables
Test case matrix for the maintain programme details use case
Test case matrix with actual data values for the programme use case
Validity checks for login form
Test case with actual data values for the login form
Validity checks for change password form
Test case with actual data values for the Change Password form
Validity checks for school form
Test case with actual data values for the school form
Validity checks for program form
Test case with actual data values for the program form
Operations of school details form
Comparison of regression and development testing
Test suite for program given in Figure 7.2
Risk analysis table
Risk analysis table of University Registration System
Variables used by modification algorithm
Test cases with execution history
Test cases with number of matches found
Test cases in decreasing order of number of modified lines covered
Test cases in descending order of number of matches found (iteration 2)
Test cases in descending order of number of matches found (iteration 3)
Test cases in descending order of number of matches found (iteration 4)
Test cases in descending order of number of matches found (iteration 5)
Variables used by deletion algorithm
Test cases with execution history
Modified execution history after deleting line numbers 4, 7 and 15
Redundant test cases
Modified table after removing T1 and T5
Test cases with modified lines
Test cases in descending order of number of modified lines covered
Test cases in descending order of number of modified lines
covered (iteration 2)
Symbols of an activity diagram
Test cases for validate function
Test cases for calculate function
Test cases of activity diagram in Figure 9.10
Test cases of activity diagram in Figure 9.11
Terminologies used in state chart diagram
State transition table for stack class
Test cases for class stack
Illegal test case for class stack
Test cases of withdrawal from ATM
Test cases of function push()
Test cases of function pop()
Test cases for function withdraw()
Coupling metrics
309
313
318
319
320
321
323
323
324
325
326
336
338
342
343
347
348
349
350
350
351
351
351
352
352
353
354
355
355
355
355
398
401
402
403
404
405
408
409
409
411
413
414
414
424
List of Tables
10.2
10.3
10.4
10.5
10.6
10.7
10.8
10.9
10.10
10.11
10.12
10.13
10.14
10.15
10.16
10.17
10.18
10.19
11.1
11.2
11.3
11.4
11.5
11.6
11.7
11.8
11.9
11.10
11.11
11.12
11.13
12.1
12.2
12.3
12.4
12.5
II-1
II-2
II-3
II-4
II-5
Cohesion metrics
Inheritance metrics
Size metrics
Time-based failure specification
Failure-based failure specification
Distribution of faults and faulty classes at high, medium and low
severity levels
Descriptive statistics for metrics
Correlations among metrics
High severity faults model statistics
Medium severity faults model statistics
Low severity faults model statistics
Ungraded severity faults model statistics
Confusion matrix
Results of 10-cross validation of models
ANN summary
Rotated principal components
Validation results of ANN model
Analysis of model evaluation accuracy
Comparison of client/server and web based applications
Sample functional test cases of order process of an online shopping
web application
Navigation testing test cases for online shopping website
Test cases of registration form of an online shopping web application
Checklist for testing user interfaces
Web application usability attributes
Browsers compatibility matrix
Configuration and compatibility testing checklist
Security testing checklist
Load testing metrics
Performance testing checklist
Sample database test cases
Web metrics
Constraints and values of paths (feasible/not feasible) of program given
in Figure 12.1
Examples of one point crossover operator
Examples of two point crossover operator
Chromosomes with fitness values for initial population
Automated test data generation tools
Scenario matrix for the maintain scheme details use case
Test case matrix for the maintain scheme details use case
Test case matrix with actual data values for the maintain scheme details
use case
Scenario matrix for the maintain paper details use case
Test case matrix for the maintain paper details use case
xix
425
425
426
427
428
438
438
439
439
440
440
440
440
441
443
444
445
445
454
457
459
460
461
463
470
471
474
477
479
481
485
499
503
504
505
511
544
545
549
555
556
xx List of Tables
II-6
II-7
II-8
II-9
II-10
II-11
II-12
II-13
II-14
II-15
Test case matrix with actual data values for the maintain paper details
use case
Scenario matrix for the maintain student details use case
Test case matrix for the maintain student details use case
Test case matrix with actual data values for the maintain student details
use case
Scenario matrix for the maintain faculty details use case
Test case matrix for the maintain faculty details use case
Test case matrix with actual data values for the maintain faculty details
use case
Scenario matrix for the maintain registration details use case
Test case matrix for the maintain registration details use case
Test case matrix with actual data values for the maintain registration details
use case
560
566
567
571
578
579
582
587
588
591
Preface
xxii
Preface
The work for this book was primarily collected from the authors several years of teaching.
Therefore, the text has been thoroughly tested in classroom and revised accordingly in the form
of this textbook. The book contains numerous solved examples and each chapter ends with
multiple choice questions and self-assessment Exercises. The answers to multiple choice
questions have also been provided for verification. An Instructor Manual for teachers is also
available on the website to provide guidance in teaching software testing.
I do realize the importance of feedback of our readers for continuous improvement in the
contents of the book. I shall appreciate the constructive criticism about anything and also about
any omission due to my ignorance. It is expected that the book will be a useful addition in the
literature on software testing. Any suggestion about the book would gratefully be received.
Yogesh Singh
Acknowledgements
This book is the result of hardwork of Dr Ruchika Malhotra, Assistant Professor, Department
of Software Engineering, Delhi Technological University, Delhi. The book would not have
been completed without her kind support.
Thanks to my undergraduate and postgraduate students of the University School of
Information Technology, Guru Gobind Singh Indraprastha University for motivating me to
write this book. Their expectations, discussions and enthusiasm always become my strength
for continuous improvement in academic pursuit. I would also like to thank all researchers,
practitioners, software developers, testers and teachers whose views, ideas and techniques find
a place in this book. I am also grateful to Sandeep Kumar, Stenographer of Examination
Division of the University for typing the draft of the manuscript.
Lastly, I am thankful to Dr Pravin Chandra, Associate Professor, Delhi University, Dr
Jitendra Chabra, Associate Professor, National Institute of Technology, Kurukshetra, Dr
Arvinder Kaur, Associate Professor, Guru Gobind Singh Indraprastha University for their
valuable suggestions. My thanks are also due to Dr Chetna Tiwari, Assistant Professor,
University School of Humanities and Social Sciences, Guru Gobind Singh Indraprastha
University for reading a few chapters of the manuscript.
1
Introduction
What is software testing? Why do we need to test software? Can we live without testing? How
do we handle software bugs reported by the customers? Who should work hard to avoid
frustrations of software failures?
Such questions are endless. But we need to find answers to such burning questions. Software
organizations are regularly getting failure reports of their products and this number is increasing
day by day. All of us would appreciate that it is extremely disappointing for the developers to
receive such failure reports. The developers normally ask: how did these bugs escape unnoticed?
It is a fact that software developers are experiencing failures in testing their coded programs and
such situations are becoming threats to modern automated civilization. We cannot imagine a day
without using cell phones, logging on to the internet, sending e-mails, watching television and
so on. All these activities are dependent on software, and software is not reliable. The world has
seen many software failures and some were even fatal to human life.
Software Testing
off from Kourou, French Guiana. The design and development took ten long years with a cost
of $7 billion. An enquiry board was constituted to find the reasons of the explosion. The board
identified the cause and mentioned in its report that [LION96]: The failure of the Ariane 5
was caused by the complete loss of guidance and altitude information, 37 seconds after start of
the main engine ignition sequence (30 seconds after lift-off). This loss of information was due
to specification and design errors in the software of the inertial reference system. The extensive
reviews and tests carried out during the Ariane 5 development programme did not include
adequate analysis and testing of the inertial reference system or of the complete flight control
system, which could have detected the potential failure. A software fault in the inertial
reference system was identified as a reason for the explosion by the enquiry committee. The
inertial reference system of the rocket had tried to convert 64 bit floating point number of
horizontal velocity to a 16 bit signed integer. However, the number was greater than 32,767
(beyond the permissible limit of 16 bit machine) and the conversion process failed.
Unfortunately, the navigation system of Ariane 4 was used in Ariane 5 without proper
testing and analysis. The Ariane 5 was a high speed rocket with higher value of an internal
alignment function, known as horizontal bias. This value is for the calculation of horizontal
velocity. On the day of the explosion, this value was more than expectations due to different
trajectory of this rocket as compared to the Ariane 4. Therefore, the main technical reason was
the conversion problem at the time of converting the horizontal bias variable, and this resulted
into the shutdown of the computer of the inertial reference system. When the computer shut
down, it passed control to an identical, redundant unit, which was there to provide backup in
case of such a failure. But the second unit had failed in the identical manner before a few
milliseconds. Why wouldnt it be? It was running the same software.
The designers never thought that the particular velocity value would ever be large enough
to cause trouble. After all, it never had been before. Unfortunately Ariane 5 was a faster rocket
than Ariane 4. Moreover, the calculation containing the error, which shut down the computer
system, actually served no purpose, once the rocket was in the air. Its only function was to
align the system before launch. So it should have been turned off. But designers chose long
ago, in an earlier version of the Ariane 4, to leave this function running for the first forty
seconds of flight a special feature meant to make the restart of the system easy in the event
of a brief hold in the countdown. Such design decisions and poor testing resulted in the
explosion of Ariane 5.
Introduction 3
Software Testing
is widespread dissatisfaction over the quality of financial software. If a system gives information
in the incorrect format, it may have an adverse impact on customer satisfaction.
Introduction 5
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
/*SOURCE CODE*/
#include<stdio.h>
#include<limits.h>
#include<conio.h>
void Minimum();
void main()
{
Minimum();
}
void Minimum()
{
int array[100];
int Number;
int i;
int tmpData;
int Minimum=INT_MAX;
clrscr();
"printf("Enter the size of the array:");
scanf("%d",&Number);
for(i=0;i<Number;i++) {
printf("Enter A[%d]=",i+1);
scanf("%d",&tmpData);
tmpData=(tmpData<0)?-tmpData:tmpData;
array[i]=tmpData;
}
i=1;
while(i<Number-1) {
if(Minimum>array[i])
{
Minimum=array[i];
}
i++;
}
printf("Minimum = %d\n", Minimum);
getch();
}
Figure 1.1. Program Minimum to find the smallest integer out of a set of integers
Software Testing
Set of Integers
Expected
Output
Observed
Output
Match?
Size
Inputs
1.
6, 9, 2, 16, 19
Yes
2.
Yes
3.
16
16
Yes
4.
21
21
Yes
5.
88
88
Yes
6.
21
21
Yes
7.
6, 2, 9, 5
Yes
8.
99, 21, 7, 49
Yes
There are 8 sets of inputs in Table 1.1. We may feel that these 8 test cases are sufficient for
such a trivial program. In all these test cases, the observed output is the same as the expected
output. We may also design similar test cases to show that the observed output is matched with
the expected output. There are many definitions of testing. A few of them are given below:
(i) Testing is the process of demonstrating that errors are not present.
(ii) The purpose of testing is to show that a program performs its intended functions
correctly.
(iii) Testing is the process of establishing confidence that a program does what it is supposed
to do.
The philosophy of all three definitions is to demonstrate that the given program behaves as
per specifications. We may write 100 sets of inputs for the program Minimum and show that
this program behaves as per specifications. However, all three definitions are not correct. They
describe almost the opposite of what testing should be viewed as. Forgetting the definitions for
the moment, whenever we want to test a program, we want to establish confidence about the
correctness of the program. Hence, our objective should not be to show that the program works
as per specifications. But, we should do testing with the assumption that there are faults and
our aim should be to remove these faults at the earliest. Thus, a more appropriate definition is
[MYER04]: Testing is the process of executing a program with the intent of finding
faults. Human beings are normally goal oriented. Thus, establishment of a proper objective
is essential for the success of any project. If our objective is to show that a program has no
errors, then we shall sub-consciously work towards this objective. We shall intend to choose
those inputs that have a low probability of making a program fail as we have seen in Table 1.1,
where all inputs are purposely selected to show that the program is absolutely correct. On the
contrary, if our objective is to show that a program has errors, we may select those test cases
which have a higher probability of finding errors. We shall focus on weak and critical portions
of the program to find more errors. This type of testing will be more useful and meaningful.
We again consider the program Minimum (given in Figure 1.1) and concentrate on some
typical and critical situations as discussed below:
(i)
(ii)
Introduction 7
(iii) A list where the minimum element is the first or last element.
(iv) A list where the minimum element is negative.
(v) A list where all elements are negative.
(vi) A list where some elements are real numbers.
(vii) A list where some elements are alphabetic characters.
(viii) A list with duplicate elements.
(ix) A list where one element has a value greater than the maximum permissible limit of an
integer.
We may find many similar situations which may be very challenging and risky for this
program and each such situation should be tested separately. In Table 1.1, we have selected
elements in every list to cover essentially the same situation: a list of moderate length,
containing all positive integers, where the minimum is somewhere in the middle. Table 1.2
gives us another view of the same program Minimum and the results are astonishing to
everyone. It is clear from the outputs that the program has many failures.
Table 1.2. Some critical/typical situations of the program Minimum
S. No.
Case 1
A very short list
with size 1, 2 or 3
Case 2
An empty list, i.e.
of size 0
Case 3
A list where the
minimum element
element
Case 4
A list where the
minimum element
is negative
Case 5
A list where all
elements are
negative
Case 6
A list where some
elements are real
numbers
Inputs
Set of Integers
Expected
Output
Observed Output
Match?
Size
A
B
C
D
E
F
1
2
2
3
3
3
90
12, 10
10, 12
12, 14, 36
36, 14, 12
14, 12, 36
90
10
10
12
12
12
2147483647
2147483647
2147483647
14
14
12
No
No
No
No
No
Yes
Error
message
2147483647
No
A
B
5
5
10
10
23
23
No
No
A
B
4
4
10, 2, 5, 23
5, 25, 20, 36
2
25
2
20
No
No
78
31
No
203
56
No
6.9
No
5.4
2, 3, 5, 6, 9
34 (The program
does not take values
for index 3,4 and 5)
858993460 (The
program does not
take any array value)
No
(Contd.)
Software Testing
(Contd.)
S. No.
Inputs
Set of Integers
Size
Case 7
A list where some
elements are
characters
Case 8
A list with duplicate elements
Case 9
A list where one
element has a
value greater than
the maximum
permissible limit
of an integer
Expected
Output
Observed Output
Match?
No
1I
2, 3, 4, 9, 6, 5,
11, 12, 14, 21, 22
2147483647
(Program does not
take any other index
value)
No
A
B
5
5
3, 4, 6, 9, 6
13, 6, 6, 9, 15
3
6
4
6
No
Yes
530,
4294967297, 23,
46, 59
23
No
What are the possible reasons for so many failures shown in Table 1.3? We should read our
program Minimum (given in Figure 1.1) very carefully to find reasons for so many failures.
The possible reasons of failures for all nine cases discussed in Table 1.2 are given in Table 1.3.
It is clear from Table 1.3 that this program suffers from serious design problems. Many
important issues are not handled properly and therefore, we get strange observed outputs. The
causes of getting these particular values of observed outputs are given in Table 1.4.
Table 1.3. Possible reasons of failures for all nine cases
S. No.
Case 1
A very short list with size 1, 2 or 3
Possible Reasons
and/or end value of the index of the usable array has not
been handled properly (see line numbers 22 and 23).
Case 2
An empty list i.e. of size 0
Case 3
A list where the minimum element is
Case 4
A list where the minimum element is
negative
Case 5
Introduction 9
(Contd.)
S. No.
Possible Reasons
Case 6
A list where some elements are real
numbers
Case 7
A list where some elements are
alphabetic characters
Case 8
A list with duplicate elements
Case 9
A list with one value greater than
the maximum permissible limit of an
integer
Observed Output
1 (a)
1 (b)
1 (c)
1 (d)
1 (e)
1 (f)
2147483647
2147483647
2147483647
14
14
12
2 (a)
2147483647
3 (a)
3 (b)
4 (a)
4 (b)
5 (a)
5 (b)
6 (a)
23
23
2
20
31
56
34
6 (b)
7 (a)
858993460
2
7 (b)
2147483647
8 (a)
8 (b)
9 (a)
Remarks
the maximum value of a 32 bit integer to which a variable minimum
is initialized.
middle value is 14.
Fortunately, the middle value is the minimum value and thus the
result is correct.
The maximum value of a 32 bit integer to which a variable minimum
is initialized.
value 23 is the minimum value in the remaining list.
converted negative integer(s) to positive integer(s).
Same as Case 4.
After getting . of 34.56, the program was terminated and 34 was
displayed. However, the program has also ignored 12, being the
Garbage value.
After getting I in the second index value 2I, the program
terminated abruptly and displayed 2.
The input has a non digit value. The program displays the value to
which variable minimum is initialized.
minimum in the remaining list.
values are ignored.
signed integer data type used in the program.
10
Software Testing
(ii)
The program has ignored the first and last values of the list
The program is not handling the first and last values of the list properly. If we see the
line numbers 22 and 23 of the program, we will identify the causes. There are two
faults. Line number 22 i = 1; should be changed to i = 0; in order to handle the first
value of the list. Line number 23 while (i<Number -1) should be changed to while
(i<=Number-1) in order to handle the last value of the list.
The program proceeds without checking the size of the array
If we see line numbers 14 and 15 of the program, we will come to know that the program
is not checking the size of the array / list before searching for the minimum value. A list
cannot be of zero or negative size. If the user enters a negative or zero value of size or value
greater than the size of the array, an appropriate message should be displayed. Hence after
line number 15, the value of the size should be checked as under:
if (Number < = 0||Number>100)
{
printf ("Invalid size specied");
}
If the size is greater than zero and lesser than 101, then the program should proceed
further, otherwise it should be terminated.
(iii) Program has converted negative values to positive values
Line number 19 is converting all negative values to positive values. That is why the
program is not able to handle negative values. We should delete this line to remove
this fault.
The modified program, based on the above three points is given in Figure 1.2. The nine
cases of Table 1.2 are executed on this modified program and the results are given in
Table 1.5.
LINE NUMBER
1.
2.
3.
4.
5.
6.
7.
8.
9.
/*SOURCE CODE*/
#include<stdio.h>
#include<limits.h>
#include<conio.h>
void Minimum();
void main()
{
Minimum();
}
void Minimum()
{
int array[100];
int Number;
(Contd.)
Introduction 11
(Contd.)
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
int i;
int tmpData;
int Minimum=INT_MAX;
clrscr();
printf("Enter the size of the array:");
scanf("%d",&Number);
if(Number<=0||Number>100) {
printf("Invalid size specied");
}
else {
for(i=0;i<Number;i++) {
printf("Enter A[%d]=",i+1);
scanf("%d",&tmpData);
/*tmpData=(tmpData<0)?-tmpData:tmpData;*/
array[i]=tmpData;
}
i=0;
while(i<=Number-1) {
if(Minimum>array[i])
{
Minimum=array[i];
}
i++;
}
printf("Minimum = %d\n", Minimum);
}
getch();
}
Figure 1.2. Modified program Minimum to find the smallest integer out of a set of integers
Table 1.5 gives us some encouraging results. Out of 9 cases, only 3 cases are not matched.
Six cases have been handled successfully by the modified program given in Figure 1.2. The
cases 6 and 7 are failed due to the scanf() function parsing problem. There are many ways to
handle this problem. We may design a program without using scanf() function at all.
However, scanf() is a very common function and all of us use it frequently. Whenever any
value is given using scanf() which is not as per specified format, scanf() behaves very
notoriously and gives strange results. It is advisable to display a warning message for the user
before using the scanf() function. The warning message may compel the user to enter values
in the specified format only. If the user does not do so, he/she may have to suffer the
consequences accordingly. The case 9 problem is due to the fixed maximal size of the
integers in the machine and the language used. This also has to be handled through a warning
message to the user. The further modified program based on these observations is given in
the Figure 1.3.
12
Software Testing
Table 1.5
Inputs
Sr. No.
Size
Set of Integers
Expected
Output
Observed
Output
Match?
Case 1
A very short list with
size 1, 2 or 3
A
B
C
D
E
F
1
2
2
3
3
3
90
12, 10
10, 12
12, 14, 36
36, 14, 12
14, 12, 36
90
10
10
12
12
12
90
10
10
12
12
12
Yes
Yes
Yes
Yes
Yes
Yes
Case 2
An empty list, i.e. of
size 0
Error
message
Error
message
Yes
Case 3
A list where the minimum element is the
10
10
Yes
10
10
Yes
10, 2, 5, 23
Yes
5, 25, 20, 36
25
25
Yes
78
78
Yes
203
203
Yes
6.9
34
No
5.4
858993460
No
No
1I
2, 3, 4, 9, 6, 5, 11,
12, 14, 21, 22
858993460
No
A
B
5
5
3,4,6,9, 6
13, 6, 6, 9, 15
3
6
3
6
Yes
Yes
530, 42949672
97, 23, 46, 59
23
No
Case 4
A list where the
minimum element is
negative
Case 5
A list where all
elements are
negative
Case 6
A list where some
elements are real
numbers
Case 7
A list where some elements are alphabetic
characters
Case 8
A list with duplicate
elements
Case 9
A list where one
element has a value
greater than the maximum permissible limit
of an integer
Introduction 13
LINE NUMBER
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
/*SOURCE CODE*/
#include<stdio.h>
#include<limits.h>
#include<conio.h>
void Minimum();
void main()
{
Minimum();
}
void Minimum()
{
int array[100];
int Number;
int i;
int tmpData;
int Minimum=INT_MAX;
clrscr();
printf("Enter the size of the array:");
scanf("%d",&Number);
if(Number<=0||Number>100) {
printf("Invalid size specied");
}
else {
printf("Warning: The data entered must be a valid integer and
must be between %d to %d, INT_MIN, INT_MAX\n");
for(i=0;i<Number;i++) {
printf("Enter A[%d]=",i+1);
scanf("%d",&tmpData);
/*tmpData=(tmpData<0)?-tmpData:tmpData;*/
array[i]=tmpData;
}
i=0;
while(i<=Number-1) {
if(Minimum>array[i])
{
Minimum=array[i];
}
i++;
}
printf("Minimum = %d\n", Minimum);
}
getch();
}
Figure 1.3. Final program Minimum to find the smallest integer out of a set of integers
14
Software Testing
Our goal is to find critical situations of any program. Test cases shall be designed for every
critical situation in order to make the program fail in such situations. If it is not possible to remove
a fault then proper warning messages shall be given at proper places in the program. The aim of
the best testing person should be to fix most of the faults. This is possible only if our intention is
to show that the program does not work as per specifications. Hence, as given earlier, the most
appropriate definition is Testing is the process of executing a program with the intent of
finding faults. Testing never shows the absence of faults, but it shows that the faults are present
in the program.
Introduction 15
a life critical systems software. The world has seen many failures and these failures have been
costly to the software companies.
The fact is that we are releasing software that is full of errors, even after doing sufficient
testing. No software would ever be released by its developers if they are asked to certify that
the software is free of errors. Testing, therefore, continues to the point where it is considered
that the cost of testing processes significantly outweighs the returns.
16
Software Testing
The testing persons must be cautious, curious, critical but non-judgmental and good
communicators. One part of their job is to ask questions that the developers might not be able
to ask themselves or are awkward, irritating, insulting or even threatening to the developers.
Some of the questions are [BENT04]:
(i) How is the software?
(ii) How good is it?
(iii) How do you know that it works? What evidence do you have?
(iv) What are the critical areas?
(v) What are the weak areas and why?
(vi) What are serious design issues?
(vii) What do you feel about the complexity of the source code?
The testing persons use the software as heavily as an expert user on the customer side. User
testing almost invariably recruits too many novice users because they are available and the
software must be usable by them. The problem is that the novices do not have domain
knowledge that the expert users have and may not recognize that something is wrong.
Many companies have made a distinction between development and testing phases by
making different people responsible for each phase. This has an additional advantage. Faced
with the opportunity of testing someone elses software, our professional pride will demand
that we achieve success. Success in testing is finding errors. We will therefore strive to reveal
any errors present in the software. In other words, our ego would have been harnessed to the
testing process, in a very positive way, in a way, which would be virtually impossible, had we
been testing our own software [NORM89]. Therefore, most of the times, the testing persons
are different from development persons for the overall benefit of the system. The developers
provide guidelines during testing; however, the overall responsibility is owned by the persons
who are involved in testing. Roles of the persons involved during development and testing are
given in Table 1.6.
Table 1.6. Persons and their roles during development and testing
S. No.
Persons
1.
Customer
2.
Project Manager
3.
Software Developer(s)
4.
Testing co-ordinator(s)
5.
Testing person(s)
Roles
Provides funding, gives requirements, approves changes and
some test results.
Plans and manages the project.
Designs, codes and builds the software; participates in source
ments and functional and technical documents.
Executes the tests and documents results.
Introduction 17
required (possible only with automated testing) to execute one set of inputs, it may take 18
hours to test all possible combinations of inputs. Here, invalid test cases are not considered
which may also require a substantial amount of time. In practice, inputs are more than two and
the size is also more than 8 bits. What will happen when inputs are real and imaginary
numbers? We may wish to go for complete testing of the program, which is neither feasible
nor possible. This situation has made this area very challenging where the million dollar
question is, How to choose a reasonable number of test cases out of a large pool of test
cases? Researchers are working very hard to find the answer to this question. Many testing
techniques attempt to provide answers to this question in their own ways. However, we do not
have a standard yardstick for the selection of test cases.
We all know the importance of this area and expect some drastic solutions in the future. We
also know that every project is a new project with new expectations, conditions and constraints.
What is the bottom line for testing? At least, we may wish to touch this bottom line, which may
incorporate the following:
(i) Execute every statement of the program at least once.
(ii) Execute all possible paths of the program at least once.
(iii) Execute every exit of the branch statement at least once.
This bottom line is also not easily achievable. Consider the following piece of source code:
1.
2.
3.
4.
5.
6.
7.
8.
if (x > 0)
{
a = a + b;
}
if (y>10)
{
c=c+d;
}
Line Numbers
1
2, 3, 4
5
6, 7, 8
End
A
B
C
D
E
18
Software Testing
The possible paths are: ACE, ABCE, ACDE and ABCDE. However, if we choose x = 9 and
y = 15, all statements are covered. Hence only one test case is sufficient for 100% statement
coverage by traversing only one path ABCDE. Therefore, 100% statement coverage may not
be sufficient, even though that may be difficult to achieve in real life programs.
Myers [MYER04] has given an example in his book entitled The art of software testing
which shows that the number of paths is too large to test. He considered a control flow graph
(as given in Figure 1.5) of a 10 to 20 statement program with DO Loop that iterates up to 20
times. Within DO Loop there are many nested IF statements. The assumption is that all
decisions in the program are independent of each other. The number of unique paths is nothing
but the number of unique ways to move from point X to point Y. Myers further stated that
executing every statement of the program at least once may seem to be a reasonable goal.
However many portions of the program may be missed with this type of criteria.
The total number of paths is approximately 1014 or 100 trillion. It is computed from
5 + 519 + 51 , where 5 is the number of independent paths of the control flow graph.
If we write, execute and verify a test case every five minutes, it would take approximately one
billion years to try every path. If we are 300 times faster, completing a test case one per second,
we could complete the job in 3.2 million years. This is an extreme situation; however, in
reality, all decisions are not independent. Hence, the total paths may be less than the calculated
paths. But real programs are much more complex and larger in size. Hence, testing all paths
is very difficult if not impossible to achieve.
We may like to test a program for all possible valid and invalid inputs and furthermore, we
may also like to execute all possible paths; but practically, it is quite difficult. Every exit
condition of a branch statement is similarly difficult to test due to a large number of such
conditions. We require effective planning, strategies and sufficient resources even to target the
minimum possible bottom line. We should also check the program for very large numbers,
very small numbers, numbers that are close to each other, negative numbers, some extreme
cases, characters, special letters, symbols and some strange cases.
20
Introduction 19
The program is a combination of source code and object code. Every phase of the software
development life cycle requires preparation of a few documentation manuals which are shown
in Figure 1.7. These are very helpful for development and maintenance activities.
20
Software Testing
Operating procedure manuals consist of instructions to set up, install, use and to maintain
the software. The list of operating procedure manuals / documents is given in Figure 1.8.
System overview
Installation guide
Reference guide
System administration
guide
Beginners guide
tutorial
Maintenance guide
Terminology and
help manual
Introduction 21
1.
Output(s) :
2.
Post-condition(s) :
(optional)
(Contd.)
22
Software Testing
(Contd.)
3.
4.
5.
6.
7.
The set of test cases is called a test suite. We may have a test suite of all test cases, test suite
of all successful test cases and test suite of all unsuccessful test cases. Any combination of test
cases will generate a test suite. All test suites should be preserved as we preserve source code
and other documents. They are equally valuable and useful for the purpose of maintenance of
the software. Sometimes test suite of unsuccessful test cases gives very important information
because these are the test cases which have made the program fail in the past.
Introduction 23
common practice. The company gets the feedback of many potential customers without
making any payment. The other good thing is that the reputation of the company is not at stake
even if many failures are encountered.
24
Software Testing
This, if started in the early phases of the software development, gives good results at a very
reasonable cost. Dynamic testing refers to executing the source code and seeing how it
performs with specific inputs. All validation activities come in this category where execution
of the program is essential.
These six objectives are impossible to achieve due to time and resource constraints as
discussed in section 1.2.4. We may achieve a few of them. If we do any compromise, we may
miss a bug. Input domain is too large to test and there are too many paths in any program.
Hence Everything is impossible and we have to settle for less than everything in real life
situations. Some of the other issues which further make the situation more complex and
complicated are given in the subsequent sub-sections.
Introduction 25
been d = ++c; but due to a typographical mistake and ignorance, d = c++; has been written.
This is a logical error and cannot be detected by the compiler. Here, confusion is due to the use
of prefix and postfix operators. A prefix operator first adds 1 to the operand and then the result
is assigned to the variable on the left. On the other hand, a postfix operator first assigns the
value to the variable on the left and then increment the operand [BALA07]. In this function the
postfix operator is used instead of the prefix operator. The function returns the integer value of
flag. If this function is executed on a 16 bit computer, the valid integer range for input c is
32768 to 32767. Hence, there are 65536 possible inputs to this program. We may not like to
create 65536 test cases. After all, who will execute those cases, if at all created, one fine day?
Which input values are to be selected for the detection of this bug? Ten test cases have been
given in Table 1.8 and none of them could detect this bug. How many test cases out of possible
65536 test cases will find this bug? What are the chances that we will select all those test cases
or any one of them in order to find this bug? Only two test cases out of 65536 can detect this
bug and are given in Table 1.9. This example shows the impossibility of testing everything.
If a small function can create so many problems, we may appreciate the problems of real life
large and complex programs. Logical bugs are extremely difficult to handle and become one
of the serious concerns of testing.
Software testing has inherent difficulties which is making it impossible to completely test
the software. It can only show that bugs are in the software but it cannot show that bugs are
not in the software at all. With all the limitations, software testing still is mandatory and a very
useful filter to detect errors, which may further be removed. However we all know that good
testing cannot make the software better, only good coding with software engineering principles
makes the software better. However, good testing techniques may detect a good number of
errors and their removal may improve the quality of the software.
1. int funct1 (int c)
2. {
3. int d, flag;
4. d = c ++ ; // should be d = ++ c; as per requirements
5. if (d < 20000)
6.
flag = 1 ;
7. else
8.
flag = 0;
9. return (flag);
10. }
Figure 1.9. A typical example
Input c
0
1
20000
30000
Expected output
1
1
0
0
Actual output
1
1
0
0
(Contd.)
26
Software Testing
(Contd.)
Test case
5.
6.
7.
8.
9.
10.
Input c
10000
20000
1
16000
27000
32000
Expected output
1
1
1
1
0
0
Actual output
1
1
1
1
0
0
Expected output
Actual output
19999
32767
Introduction 27
These verification activities are treated as error preventive exercises and are applied at
requirements analysis and specification phase, high level design phase, detailed design phase
and implementation phase. We not only want to improve the quality of the end products at all
phases by reviews, inspections and walkthroughs, but also want to design test cases and test
plans during these phases. The designing of test cases after requirement analysis and
specification phase, high level design phase, detailed design phase and implementation phase
may help us to improve the quality of the final product and also reduce the cost and development
time.
28
Software Testing
phase. Similarly the system test case design and planning activities should be carried out along with
high level design phase. Unit and integration test case design and planning activities should be
carried out along with the detailed design phase. The development work is to be done by the
development team and testing is to be done by the testing team simultaneously. After the completion
of implementation, we will have the required test cases for every phase of testing. The only
remaining work is to execute these test cases and observe the responses of the outcome of the
execution. This model brings the quality into the development of our products. The encouragement
of writing test cases and test plans in the earlier phases of the software development life cycle is the
real strength of this model. We require more resources to implement this model as compared to the
waterfall model. This model also suffers from many disadvantages of the waterfall model like nonavailability of a working version of the product until late in the life cycle, difficulty in accommodating
any change, etc. This model has also limited applications in todays interactive software
processes.
Introduction 29
30
Software Testing
Introduction 31
32
Software Testing
Introduction 33
(a) Cautious
(b) Curious
(c) Judgmental
(d) Critical
1.42 What should be the best possible objective for testing?
(a) Execute every statement at least once
(b) Execute every path at least once
(c) Execute every branch statement at least once
(d) Execute every condition of a branch statement at least once
1.43 Which is not a user manual?
(a) Reference guide
(b) Beginners guide
(c) Sequence diagrams
(d) System overview
1.44 Which is not a documentation manual?
(a) SRS document
(b) SDD document
(c) Source code
(d) Installation guide
1.45 Which is not the limitation of testing?
(a) Difficult to measure the progress of testing
(b) Availability of testing tools
(c) Input domain is too large to test
(d) Too many paths in the program
1.46 How much percentage of cost is generally consumed in software testing with reference
to software development cost?
(a) 10 20
(b) 40 50
(c) 80 90
(d) 70 80
1.47 How much testing is enough?
(a) Not easy to decide
(b) Depends on complexity and criticality
(c) Depends on abilities of testing persons
(d) Depends on maturity of developers
1.48 If an expected output is not specified then:
(a) We cannot execute the test case
(b) We may not be able to repeat the test
(c) We may not be able to decide if the test has passed or failed
(d) We may not be able to automate the testing activity
1.49 Which of the following is a reason for a software failure?
(a) Testing fault
(b) Software Fault
(c) Design Fault
(d) Requirement Fault
34
Software Testing
EXERCISES
1.1
1.2
1.3
1.4
Introduction 35
1.17 Explain a typical test case template. What are the reasons for documenting test cases?
1.18 With the help of a suitable example, illustrate why exhaustive testing is not possible.
1.19 Define a test case. What are the objectives of test case design? Discuss the various
steps involved.
1.20 What is the role of Quality Assurance in software development? How is it different
from Quality Control?
1.21 What is software crisis? Was Y2K a software crisis?
1.22 What are the components of a software system? Discuss how a software differs from
a program.
1.23 Differentiate between generic and customized software products. Which one has a
large market share and why?
1.24 What is a software failure? Discuss the conditions of a failure. Mere presence of faults
may not lead to failures. Explain with the help of an example.
1.25 Verification and validation are used interchangeably many times. Define these terms
and establish their relationship with testing.
1.26 Testing is not a single phase in the software development life cycle. Explain and
comment.
1.27 Discuss the advantages of testing with reference to the software product.
1.28 Discuss the significance of the V-shaped software life cycle model and also establish
the relationship between its development and testing parts.
1.29 What is the relationship of the V-shaped software life cycle model with the waterfall
model? How is acceptance testing related to requirement analysis and specification
phase?
1.30 Differentiate between the V-shaped software life cycle model and the waterfall model.
FURTHER READING
The classic book on software testing which was the only book of note for years is:
G.J. Myers, The Art of Software Testing, John Wiley and Sons, Inc., 1977.
One of the first articles that describes the changes in growth of software testing is:
D. Gelperin, B. Hetzel, The Growth of Software Testing, Communications of
the ACM, vol. 31, no. 6, June 1988.
Read the record report of Prof. J.L. Lions, Chairman Inquiry Board and Director
General of ESA prepared on 19th July, 1996 for Identification of the Causes of Failure
of Ariane 5:
J.L. Lions, Ariane 5 Flight 501 Failure, http://esamultimedia.esa.int/docs/esax-1819eng.pdf, July 19, Paris, 1996.
Many good books and articles have been written on the causes of software failure,
including:
S.A. Sherer, Software Failure Risk, Plenum, 1992.
P. Neumann, Computer Related Risks, Addison Wesley, 1995.
S. Flowers, Software Failure: Management Failure, John Wiley and Sons,
1996.
36
Software Testing
2
Functional Testing
Software testing is very important but is an effort-consuming activity. A large number of test
cases are possible and some of them may make the software fail. As we all know, if observed
behaviour of the software is different from the expected behaviour, we treat this as a failure
condition. Failure is a dynamic condition that always occurs after the execution of the software.
Everyone is in search of such test cases which may make the software fail and every technique
attempts to find ways to design those test cases which have a higher probability of showing a
failure.
Functional testing techniques attempt to design those test cases which have a higher probability
of making a software fail. These techniques also attempt to test every possible functionality of
the software. Test cases are designed on the basis of functionality and the internal structure of the
program is completely ignored. Observed output(s) is (are) compared with expected output(s) for
selected input(s) with preconditions, if any. The software is treated as a black box and therefore,
it is also known as black box testing as shown in Figure 2.1.
Every dot in the input domain represents a set of inputs and every dot in the output domain
represents a set of outputs. Every set of input(s) will have a corresponding set of output(s). The
test cases are designed on the basis of user requirements without considering the internal
structure of the program. This black box knowledge is sufficient to design a good number of
test cases. Many activities are performed in real life with only black box knowledge like
38
Software Testing
driving a car, using a cell phone, operating a computer, etc. In functional testing techniques,
execution of a program is essential and hence these testing techniques come under the category
of validation. Here, both valid and invalid inputs are chosen to see the observed behaviour of
the program. These techniques can be used at all levels of software testing like unit, integration,
system and acceptance testing. They also help the tester to design efficient and effective test
cases to find faults in the software.
Minimum value
Just above minimum value
Maximum value
Just below maximum value
Nominal (Average) value
These values are shown in Figure 2.2 for the program Square.
These five values (1, 2, 50, 99 and 100) are selected on the basis of boundary value analysis
and give reasonable confidence about the correctness of the program. There is no need to select
all 100 inputs and execute the program one by one for all 100 inputs. The number of inputs
selected by this technique is 4n + 1 where n is the number of inputs. One nominal value is
selected which may represent all values which are neither close to boundary nor on the
boundary. Test cases for Square program are given in Table 2.1.
Table 2.1. Test cases for the Square program
Test Case
1.
2.
3.
4.
5.
Input x
1
2
50
99
100
Expected output
1
4
2500
9801
10000
Functional Testing
39
Consider a program Addition with two input values x and y and it gives the addition of x
and y as an output. The range of both input values are given as:
100
300
200
400
The x and y inputs are required for the execution of the program. The input domain of
this program Addition is shown in Figure 2.4. Any point within the inner rectangle is a
legitimate input to the program.
We also consider single fault assumption theory of reliability which says that failures are
rarely the result of the simultaneous occurrence of two (or more) faults. Normally, one fault is
responsible for one failure. With this theory in mind, we select one input value on boundary
(minimum), just above boundary (minimum +), just below boundary (maximum ), on boundary
40
Software Testing
(maximum), nominal (average) and other n-1 input values as nominal values. The inputs are
shown graphically in Figure 2.5 and the test cases for Addition program are given in Table
2.2.
Expected Output
1.
100
300
400
2.
101
300
401
3.
200
300
500
4.
299
300
599
5.
300
300
600
6.
200
200
400
7.
200
201
401
8.
200
300
500
9.
200
399
599
10.
200
400
600
In Table 2.2, two test cases are common (3 and 8), hence one must be selected. This
technique generates 9 test cases where all inputs have valid values. Each dot of the Figure 2.5
represents a test case and inner rectangle is the domain of legitimate input values. Thus, for a
program of n variables, boundary value analysis yields 4n + 1 test cases.
Example 2.1: Consider a program for the determination of the largest amongst three numbers.
Its input is a triple of positive integers (say x,y and z) and values are from interval [1, 300].
Design the boundary value test cases.
Functional Testing
41
Solution: The boundary value test cases are given in Table 2.3.
Table 2.3.
Test Case
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
x
1
2
150
299
300
150
150
150
150
150
150
150
150
y
150
150
150
150
150
1
2
299
300
150
150
150
150
z
150
150
150
150
150
150
150
150
150
1
2
299
300
Expected output
150
150
150
299
300
150
150
299
300
150
150
299
300
Example 2.2: Consider a program for the determination of division of a student based on the
marks in three subjects. Its input is a triple of positive integers (say mark1, mark2, and mark3)
and values are from interval [0, 100].
The division is calculated according to the following rules:
Marks Obtained
(Average)
Division
75 100
60 74
50 59
40 49
0 39
Total marks obtained are the average of marks obtained in the three subjects i.e.
Average = (mark1 + mark 2 + mark3) / 3
The program output may have one of the following words:
[Fail, Third Division, Second Division, First Division, First Division with Distinction]
Design the boundary value test cases.
Solution: The boundary value test cases are given in Table 2.4.
Table 2.4. Boundary value test cases for the program determining the division of a student
Test Case
mark1
mark2
mark3
Expected Output
1.
50
50
Fail
2.
50
50
Fail
3.
50
50
50
Second Division
4.
99
50
50
First Division
5.
100
50
50
First Division
(Contd.)
42
Software Testing
(Contd.)
Test Case
6.
7.
8.
9.
10.
11.
12.
13.
mark1
50
50
50
50
50
50
50
50
mark2
0
1
99
100
50
50
50
50
mark3
50
50
50
50
0
1
99
100
Expected Output
Fail
Fail
First Division
First Division
Fail
Fail
First Division
First Division
Example 2.3: Consider a program for classification of a triangle. Its input is a triple of
positive integers (say a, b, c) and the input parameters are greater than zero and less than or
equal to 100.
The triangle is classified according to the following rules:
Right angled triangle: c2 = a2 + b2 or a2 = b2 + c2 or b2 = c2 + a2
Obtuse angled triangle: c2 > a2 + b2 or a2 > b2 + c2 or b2 > c2 + a2
Acute angled triangle: c2 < a2 + b2 and a2 < b2 + c2 and b2 < c2 + a2
The program output may have one of the following words:
[Acute angled triangle, Obtuse angled triangle, Right angled triangle, Invalid triangle]
Design the boundary value test cases.
Solution: The boundary value analysis test cases are given in Table 2.5.
Table 2.5.
Test Case
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
a
1
2
50
99
100
50
50
50
50
50
50
50
50
b
50
50
50
50
50
1
2
99
100
50
50
50
50
c
50
50
50
50
50
50
50
50
50
1
2
99
100
Expected Output
Acute angled triangle
Acute angled triangle
Acute angled triangle
Obtuse angled triangle
Invalid triangle
Acute angled triangle
Acute angled triangle
Obtuse angled triangle
Invalid triangle
Acute angled triangle
Acute angled triangle
Obtuse angled triangle
Invalid triangle
Example 2.4: Consider a program for determining the day of the week. Its input is a triple of
day, month and year with the values in the range
Functional Testing
month
day
1900
43
12
31
year
2058
The possible outputs would be the day of the week or invalid date. Design the boundary
value test cases.
Solution: The boundary value test cases are given in Table 2.6.
Table 2.6. Boundary value test cases for the program determining the day of the week
Test Case
month
day
year
Expected Output
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
1
2
6
11
12
6
6
6
6
6
6
6
6
15
15
15
15
15
1
2
30
31
15
15
15
15
1979
1979
1979
1979
1979
1979
1979
1979
1979
1900
1901
2057
2058
Monday
Thursday
Friday
Thursday
Saturday
Friday
Saturday
Saturday
Invalid Date
Friday
Saturday
Friday
Saturday
44
Software Testing
Table 2.7.
Test Case
Expected Output
1.
99
300
Invalid Input
2.
100
300
400
3.
101
300
401
4.
200
300
500
5.
299
300
599
6.
300
300
600
7.
301
300
Invalid Input
8.
200
199
Invalid Input
9.
200
200
400
10.
200
201
401
11.
200
399
599
12.
200
400
600
13.
200
401
Invalid Input
Functional Testing
45
Expected Output
1.
100
200
300
2.
100
201
301
3.
100
300
400
4.
100
399
499
5.
100
400
500
6.
101
200
301
7.
101
201
302
8.
101
300
401
9.
101
399
500
10.
101
400
501
11.
200
200
400
12.
200
201
401
13.
200
300
500
14.
200
399
599
(Contd.)
46
Software Testing
(Contd.)
Test Case
Expected Output
15.
200
400
600
16.
299
200
499
17.
299
201
500
18.
299
300
599
19.
299
399
698
20.
299
400
699
21.
300
200
500
22.
300
201
501
23.
300
300
600
24.
300
399
699
25.
300
400
700
This is a more comprehensive technique and boundary value test cases are proper sub-sets
of worst case test cases. This requires more effort and is recommended in situations where
failure of the program is extremely critical and costly [JORG07].
Functional Testing
Table 2.9.
Test Case
Expected Output
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
99
99
99
99
99
99
99
100
100
100
100
100
100
100
101
101
101
101
101
101
101
200
200
200
200
200
200
200
299
299
299
299
299
299
299
300
300
300
300
300
300
300
301
301
301
301
301
301
301
199
200
201
300
399
400
401
199
200
201
300
399
400
401
199
200
201
300
399
400
401
199
200
201
300
399
400
401
199
200
201
300
399
400
401
199
200
201
300
399
400
401
199
200
201
300
399
400
401
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
300
301
400
499
500
Invalid input
Invalid input
301
302
401
500
501
Invalid input
Invalid input
400
401
500
599
600
Invalid input
Invalid input
499
500
599
698
699
Invalid input
Invalid input
500
501
600
699
700
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
Invalid input
47
48
Software Testing
2.1.4 Applicability
Boundary value analysis is a simple technique and may prove to be effective when used
correctly. Here, input values should be independent which restricts its applicability in many
programs. This technique does not make sense for Boolean variables where input values are
TRUE and FALSE only, and no choice is available for nominal values, just above boundary
values, just below boundary values, etc. This technique can significantly reduce the number of
test cases and is suited to programs in which input values are within ranges or within sets. This
is equally applicable at the unit, integration, system and acceptance test levels. All we want is
input values where boundaries can be identified from the requirements.
Example 2.5: Consider the program for the determination of the largest amongst three
numbers as explained in example 2.1. Design the robust test cases and worst case test cases for
this program.
Solution: The robust test cases and worst test cases are given in Table 2.10 and Table 2.11
respectively.
Table 2.10.
Test Case
Expected output
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
0
1
2
150
299
300
301
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
0
1
2
299
300
301
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
0
1
2
299
300
301
Invalid input
150
150
150
299
300
Invalid input
Invalid input
150
150
299
300
Invalid input
Invalid input
150
150
299
300
Invalid input
Table 2.11.
Test Case
Expected output
1.
2.
3.
1
1
1
1
1
1
1
2
150
1
2
150
(Contd.)
Functional Testing
49
(Contd.)
Test Case
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
x
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
y
1
1
2
2
2
2
2
150
150
150
150
150
299
299
299
299
299
300
300
300
300
300
1
1
1
1
1
2
2
2
2
2
150
150
150
150
150
299
299
299
299
299
300
300
300
300
z
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
Expected output
299
300
2
2
150
299
300
150
150
150
299
300
299
299
299
299
300
300
300
300
300
300
2
2
150
299
300
2
2
150
299
300
150
150
150
299
300
299
299
299
299
300
300
300
300
300
(Contd.)
50
Software Testing
(Contd.)
Test Case
50.
51.
52.
53.
54.
55.
56.
57.
58.
59.
60.
61.
62.
63.
64.
65.
66.
67.
68.
69.
70.
71.
72.
73.
74.
75.
76.
77.
78.
79.
80.
81.
82.
83.
84.
85.
86.
87.
88.
89.
90.
91.
92.
93.
94.
95.
x
2
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
150
299
299
299
299
299
299
299
299
299
299
299
299
299
299
299
299
299
299
299
299
y
300
1
1
1
1
1
2
2
2
2
2
150
150
150
150
150
299
299
299
299
299
300
300
300
300
300
1
1
1
1
1
2
2
2
2
2
150
150
150
150
150
299
299
299
299
299
z
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
Expected output
300
150
150
150
299
300
150
150
150
299
300
150
150
150
299
300
299
299
299
299
300
300
300
300
300
300
299
299
299
299
300
299
299
299
299
300
299
299
299
299
300
299
299
299
299
300
(Contd.)
Functional Testing
51
(Contd.)
Test Case
Expected output
96.
97.
98.
99.
100.
101.
102.
103.
104.
105.
106.
107.
108.
109.
110.
111.
112.
113.
114.
115.
116.
117.
118.
119.
120.
121.
122.
123.
124.
125.
299
299
299
299
299
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
1
1
1
1
1
2
2
2
2
2
150
150
150
150
150
299
299
299
299
299
300
300
300
300
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
1
2
150
299
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
300
Example 2.6: Consider the program for the determination of division of a student based on
marks obtained in three subjects as explained in example 2.2. Design the robust test cases and
worst case test cases for this program.
Solution: The robust test cases and worst test cases are given in Table 2.12 and Table 2.13
respectively.
Table 2.12.
Test Case
1.
2.
3.
4.
5.
6.
mark1
1
0
1
50
99
100
mark2
50
50
50
50
50
50
mark3
50
50
50
50
50
50
Expected Output
Invalid marks
Fail
Fail
Second Division
First Division
First Division
(Contd.)
52
Software Testing
(Contd.)
Test Case
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
mark1
101
50
50
50
50
50
50
50
50
50
50
50
50
mark2
50
1
0
1
99
100
101
50
50
50
50
50
50
mark3
50
50
50
50
50
50
50
1
0
1
99
100
101
Expected Output
Invalid marks
Invalid marks
Fail
Fail
First Division
First Division
Invalid marks
Invalid marks
Fail
Fail
First Division
First Division
Invalid Marks
Table 2.13. Worst case test cases for the program for determining the division of a student
Test Case
mark1
mark2
mark3
Expected Output
1
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
1
1
1
0
0
0
0
0
1
1
1
1
1
50
50
50
50
50
99
99
99
99
99
100
100
100
100
100
0
0
0
0
0
1
1
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Fail
Third division
Second division
Fail
Fail
Third division
First division
First division
Fail
Fail
Second division
First division
First division
Fail
Fail
Fail
Fail
Fail
Fail
Fail
(Contd.)
Functional Testing
53
(Contd.)
Test Case
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.
59.
60.
61.
62.
63.
64.
65.
66.
67.
68.
69.
70.
71.
72.
73.
74.
75.
76.
77.
78.
79.
80.
81.
82.
mark1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
99
99
99
99
99
99
99
mark2
1
1
1
50
50
50
50
50
99
99
99
99
99
100
100
100
100
100
0
0
0
0
0
1
1
1
1
1
50
50
50
50
50
99
99
99
99
99
100
100
100
100
100
0
0
0
0
0
1
1
mark3
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
Expected Output
Fail
Fail
Fail
Fail
Fail
Fail
Second division
Second division
Fail
Fail
Second division
First division
First division
Fail
Fail
Second division
First division
First division
Fail
Fail
Fail
Third division
Second division
Fail
Fail
Fail
Second division
Second division
Fail
Fail
Second division
First division
First division
Third division
Second division
First division
First division with distinction
First division with distinction
Second division
Second division
First division
First division
First division with distinction
Fail
Fail
Third division
First division
First division
Fail
Fail
(Contd.)
54
Software Testing
(Contd.)
Test Case
83.
84.
85.
86.
87.
88.
89.
90.
91.
92.
93.
94.
95.
96.
97.
98.
99.
100.
101.
102.
103.
104.
105.
106.
107.
108.
109.
110.
111.
112.
113.
114.
115.
116.
117.
118.
119.
120.
121.
122.
123.
124.
125.
mark1
99
99
99
99
99
99
99
99
99
99
99
99
99
99
99
99
99
99
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
100
mark2
1
1
1
50
50
50
50
50
99
99
99
99
99
100
100
100
100
100
0
0
0
0
0
1
1
1
1
1
50
50
50
50
50
99
99
99
99
99
100
100
100
100
100
mark3
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
0
1
50
99
100
Expected Output
Second division
First division
First division
Third division
Second division
First division
First division with distinction
First division with distinction
First division
First division
First division with distinction
First division with distinction
First division with distinction
First division
First division
First division with distinction
First division with distinction
First division with distinction
Fail
Fail
Second division
First division
First division
Fail
Fail
Second division
First division
First division
Second division
Second division
First division
First division with distinction
First division with distinction
First division
First division
First division with distinction
First division wit distinction
First division with distinction
First division
First division
First division with distinction
First division with distinction
First division with distinction
Example 2.7: Consider the program for classification of a triangle in example 2.3. Generate
robust and worst test cases for this program.
Solution: Robust test cases and worst test cases are given in Table 2.14 and Table 2.15
respectively.
Functional Testing
55
Table 2.14.
Test Case
Expected Output
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
0
1
2
50
99
100
101
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
0
1
2
99
100
101
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
50
0
1
2
99
100
101
Table 2.15.
Test Case
Expected Output
1.
2.
Invalid triangle
3.
50
Invalid triangle
4.
99
Invalid triangle
5.
100
Invalid triangle
6.
Invalid triangle
7.
8.
50
Invalid triangle
9.
99
Invalid triangle
10.
100
Invalid triangle
11.
50
Invalid triangle
12.
50
Invalid triangle
13.
50
50
14.
50
99
Invalid triangle
15.
50
100
Invalid triangle
16.
17.
1
1
99
99
1
2
Invalid triangle
Invalid triangle
(Contd.)
56
Software Testing
(Contd.)
Test Case
Expected Output
18.
99
50
Invalid triangle
19.
99
99
20.
99
100
Invalid triangle
21.
100
Invalid triangle
22.
100
Invalid triangle
23.
100
50
Invalid triangle
24.
100
99
Invalid triangle
25.
100
100
26.
Invalid triangle
27.
28.
50
Invalid triangle
29.
99
Invalid triangle
30.
100
Invalid triangle
31.
32.
33.
50
Invalid triangle
34.
99
Invalid triangle
35.
100
Invalid triangle
36.
50
Invalid triangle
37.
50
Invalid triangle
38.
50
50
39.
50
99
Invalid triangle
40.
50
100
Invalid triangle
41.
99
Invalid triangle
42.
99
Invalid triangle
43.
99
50
Invalid triangle
44.
99
99
Acute angled
45.
99
100
46.
100
Invalid triangle
47.
100
Invalid triangle
48.
100
50
Invalid triangle
49.
100
99
50.
100
100
51.
50
Invalid triangle
52.
50
Invalid triangle
53.
50
50
54.
50
99
Invalid triangle
55.
50
100
Invalid triangle
56.
50
Invalid triangle
(Contd.)
Functional Testing
57
(Contd.)
Test Case
Expected Output
57.
50
Invalid triangle
58.
50
50
59.
50
99
Invalid triangle
60.
50
100
Invalid triangle
61.
50
50
62.
50
50
63.
50
50
50
64.
50
50
99
65.
50
50
100
Invalid triangle
66.
50
99
Invalid triangle
67.
50
99
Invalid triangle
68.
50
99
50
69.
50
99
99
70.
50
99
100
71.
50
100
Invalid triangle
72.
50
100
Invalid triangle
73.
50
100
50
Invalid triangle
74.
50
100
99
75.
50
100
100
76.
99
Invalid triangle
77.
99
Invalid triangle
78.
99
50
Invalid triangle
79.
99
99
80.
99
100
Invalid triangle
81.
99
Invalid triangle
82.
99
Invalid triangle
83.
99
50
Invalid triangle
84.
99
99
85.
99
100
86.
99
50
Invalid triangle
87.
99
50
Invalid triangle
88.
99
50
50
89.
99
50
99
90.
99
50
100
91.
99
99
92.
99
99
93.
99
99
50
94.
95.
99
99
99
99
99
100
58
Software Testing
(Contd.)
Test Case
Expected Output
96.
99
100
Invalid triangle
97.
99
100
98.
99
100
50
99.
99
100
99
100.
99
100
100
101.
100
Invalid triangle
102.
100
Invalid triangle
103.
100
50
Invalid triangle
104.
100
99
Invalid triangle
105.
100
100
106.
100
Invalid triangle
107.
100
Invalid triangle
108.
100
50
Invalid triangle
109.
100
99
110.
100
100
111.
100
50
Invalid triangle
112.
100
50
Invalid triangle
113.
100
50
50
Invalid triangle
114.
100
50
99
115.
100
50
100
116.
100
99
Invalid triangle
117.
100
99
118.
100
99
50
119.
100
99
99
120.
100
99
100
121.
100
100
122.
100
100
123.
100
100
50
124.
100
100
99
125.
100
100
100
Example 2.8: Consider the program for the determination of day of the week as explained in
example 2.4. Design the robust and worst test cases for this program.
Solution: Robust test cases and worst test cases are given in Table 2.16 and Table 2.17
respectively.
Functional Testing
59
Table 2.16.
Test Case
month
day
year
Expected Output
1.
15
1979
Invalid date
2.
15
1979
Monday
3.
15
1979
Thursday
4.
15
1979
Friday
5.
11
15
1979
Thursday
6.
12
15
1979
Saturday
7.
13
15
1979
Invalid date
8.
1979
Invalid date
9.
1979
Friday
10.
1979
Saturday
11.
30
1979
Saturday
12.
31
1979
Invalid date
13.
32
1979
Invalid date
14.
15
1899
15.
15
1900
Friday
16.
15
1901
Saturday
17.
15
2057
Friday
18.
15
2058
Saturday
19.
15
2059
Table 2.17. Worst case test cases for the program determining day of the week
Test Case
month
day
year
Expected Output
1.
1900
Monday
2.
1901
Tuesday
3.
1979
Monday
4.
2057
Monday
5.
2058
Tuesday
6.
1900
Tuesday
7.
1901
Wednesday
8.
1979
Tuesday
9.
2057
Tuesday
10.
2058
Wednesday
11.
15
1900
Monday
12.
15
1901
Tuesday
13.
15
1979
Monday
14.
15
2057
Monday
15.
15
2058
Tuesday
(Contd.)
60
Software Testing
(Contd.)
Test Case
month
day
year
Expected Output
16.
30
1900
Tuesday
17.
30
1901
Wednesday
18.
30
1979
Tuesday
19.
30
2057
Tuesday
20.
30
2058
Wednesday
21.
31
1900
Wednesday
22.
31
1901
Thursday
23.
31
1979
Wednesday
24.
31
2057
Wednesday
25.
31
2058
Thursday
26.
1900
Thursday
27.
1901
Friday
28.
1979
Thursday
29.
2057
Thursday
30.
2058
Friday
31.
1900
Friday
32.
1901
Saturday
33.
1979
Friday
34.
2057
Friday
35.
2058
Saturday
36.
15
1900
Thursday
37.
15
1901
Friday
38.
15
1979
Thursday
39.
15
2057
Thursday
40.
15
2058
Friday
41.
30
1900
Invalid date
42.
30
1901
Invalid date
43.
30
1979
Invalid date
44.
30
2057
Invalid date
45.
30
2058
Invalid date
46.
31
1900
Invalid date
47.
31
1901
Invalid date
48.
31
1979
Invalid date
49.
31
2057
Invalid date
50.
31
2058
Invalid date
51.
1900
Friday
52.
1901
Saturday
(Contd.)
Functional Testing
61
(Contd.)
Test Case
month
day
year
Expected Output
53.
1979
Friday
54.
2057
Friday
55.
2058
Saturday
56.
1900
Saturday
57.
1901
Sunday
58.
1979
Saturday
59.
2057
Saturday
60.
2058
Sunday
61.
15
1900
Friday
62.
15
1901
Saturday
63.
15
1979
Friday
64.
15
2057
Friday
65.
15
2058
Saturday
66.
30
1900
Saturday
67.
30
1901
Sunday
68.
30
1979
Saturday
69.
30
2057
Saturday
70.
30
2058
Sunday
71.
31
1900
Invalid date
72.
31
1901
Invalid date
73.
31
1979
Invalid date
74.
31
2057
Invalid date
75.
31
2058
Invalid date
76.
11
1900
Thursday
77.
11
1901
Friday
78.
11
1979
Thursday
79.
11
2057
Thursday
80.
11
2058
Friday
81.
11
1900
Friday
82.
11
1901
Saturday
83.
11
1979
Friday
84.
11
2057
Friday
85.
11
2058
Saturday
86.
11
15
1900
Thursday
87.
11
15
1901
Friday
88.
11
15
1979
Thursday
89.
11
15
2057
Thursday
(Contd.)
62
Software Testing
(Contd.)
Test Case
month
day
year
Expected Output
90.
11
15
2058
Friday
91.
11
30
1900
Friday
92.
11
30
1901
Saturday
93.
11
30
1979
Friday
94.
11
30
2057
Friday
95.
11
30
2058
Saturday
96.
11
31
1900
Invalid date
97.
11
31
1901
Invalid date
98.
11
31
1979
Invalid date
99.
11
31
2057
Invalid date
100.
11
31
2058
Invalid date
101.
12
1900
Saturday
102.
12
1901
Sunday
103.
12
1979
Saturday
104.
12
2057
Saturday
105.
12
2058
Sunday
106.
12
1900
Sunday
107.
12
1901
Monday
108.
12
1979
Sunday
109.
12
2057
Sunday
110.
12
2058
Monday
111.
12
15
1900
Saturday
112.
12
15
1901
Sunday
113.
12
15
1979
Saturday
114.
12
15
2057
Saturday
115.
12
15
2058
Sunday
116.
12
30
1900
Sunday
117.
12
30
1901
Monday
118.
12
30
1979
Sunday
119.
12
30
2057
Sunday
120.
12
30
2058
Monday
121.
12
31
1900
Monday
122.
12
31
1901
Tuesday
123.
12
31
1979
Monday
124.
12
31
2057
Monday
125.
12
31
2058
Tuesday
Functional Testing
63
Input x
Expected Output
I1
Invalid Input
I2
50
2500
I3
101
Invalid Input
The following equivalence classes can be generated for program Addition for input
domain:
(i) I1 = { 100 x 300 and 200 y 400 } (Both x and y are valid values)
(ii) I2 = { 100 x 300 and y < 200 } (x is valid and y is invalid)
(iii) I3 = { 100 x 300 and y > 400 } (x is valid and y is invalid)
(iv) I4 = { x < 100 and 200 y 400 } (x is invalid and y is valid)
(v) I5 = { x > 300 and 200 y 400 } (x is invalid and y is valid)
(vi) I6 = { x < 100 and y < 200 } (Both inputs are invalid)
(vii) I7 = { x < 100 and y > 400} (Both inputs are invalid)
(viii) I8 = { x > 300 and y < 200 } (Both inputs are invalid)
(ix) I9 = { x > 300 and y > 400 } (Both inputs are invalid)
64
Software Testing
The graphical representation of inputs is shown in Figure 2.9 and the test cases are given in
Table 2.19.
Expected Output
I1
200
300
500
I2
200
199
Invalid input
I3
200
401
Invalid input
I4
99
300
Invalid input
I5
301
300
Invalid input
I6
99
199
Invalid input
I7
99
401
Invalid input
I8
301
199
Invalid input
I9
301
401
Invalid input
The equivalence classes of input domain may be mutually exclusive (as shown in Figure
2.10 (a)) and they may have overlapping regions (as shown in Figure 2.10 (b)).
We may also partition output domain for the design of equivalence classes. Every output
will lead to an equivalence class. Thus, for Square program, the output domain equivalence
classes are given as:
O1 = {square of the input number x}
O2 = {Invalid input)
The test cases for output domain are shown in Table 2.20. Some of input and output domain
test cases may be the same.
Functional Testing
65
Input x
Expected Output
O1
50
2500
O2
Invalid Input
We may also design output domain equivalence classes for the program Addition as given
below:
O1 = { Addition of two input numbers x and y }
O2 = {Invalid Input}
The test cases are given in Table 2.21.
Table 2.21.
Test Case
Expected Output
O1
200
300
500
O2
99
300
Invalid Input
In the above two examples, valid input domain has only one equivalence class. We may
design more numbers of equivalence classes based on the type of problem and nature of inputs
and outputs. Here, the most important task is the creation of equivalence classes which require
domain knowledge and experience of testing. This technique reduces the number of test cases
that should be designed and executed.
2.2.2 Applicability
It is applicable at unit, integration, system and acceptance test levels. The basic requirement is
that inputs or outputs must be partitioned based on the requirements and every partition will
give a test case. The selected test case may test the same thing, as would have been tested by
another test case of the same equivalence class, and if one test case catches a bug, the other
66
Software Testing
probably will too. If one test case does not find a bug, the other test cases of the same
equivalence class may also not find any bug. We do not consider dependencies among different
variables while designing equivalence classes.
The design of equivalence classes is subjective and two testing persons may design two
different sets of partitions of input and output domains. This is understandable and correct as
long as the partitions are reviewed and all agree that they acceptably cover the program under
test.
Example 2.9: Consider the program for determination of the largest amongst three numbers
specified in example 2.1. Identify the equivalence class test cases for output and input domain.
Solution: Output domain equivalence classes are:
O1= {<x, y, z > : Largest amongst three numbers x, y, z }
O2= {<x, y, z > : Input values(s) is /are out of range with sides x, y, z }
The test cases are given in Table 2.22.
Table 2.22.
Test Case
O1
150
140
110
150
50
301
O2
50
Expected Output
300 and 1
I2 = { x < 1 and 1
300 and 1
z
I4 = { 1
300 and 1
y
300 and 1
I6 = { 1
I7 = { 1
300 and 1
300 and 1
I3 = { 1
300 and y > 300 and z > 300 }( x is valid, y is invalid and z is invalid)
I16 = {1
300 and y < 1 and z > 300 } ( x is valid, y is invalid and z is invalid)
Functional Testing
I17 = { 1
67
300 and y > 300 and z < 1 } ( x is valid, y is invalid and z is invalid)
I20 = { x < 1 and y < 1 and z < 1 } (All inputs are invalid)
I21 = { x > 300 . and y > 300 and z > 300 } ( All inputs are invalid)
I22 = { x < 1 and y < 1 and z > 300 } (All inputs are invalid)
I23 = { x < 1 and y > 300 and z < 1 } (All inputs are invalid)
124 = { x > 300 and y < 1 and z < 1 } (All inputs are invalid)
125 = { x > 300 and y > 300 and z < 1 } (All inputs are invalid)
I26 = { x > 300 and y < 1 and z > 300 } (All inputs are invalid)
I27 = { x < 1 and y > 300 and z > 300 } (All inputs are invalid)
The input domain test cases are given in Table 2.23.
Table 2.23. Input domain test case
Test Case
Expected Output
I1
150
40
50
150
I2
50
50
I3
50
50
I4
50
50
I5
101
50
50
I6
50
101
50
I7
50
50
101
I8
50
I9
50
I10
50
I11
301
301
50
I12
50
301
301
I13
301
50
301
I14
301
50
I15
301
50
I16
50
301
I17
50
301
I18
50
301
I19
301
50
68
Software Testing
(Contd.)
Test Case
Expected Output
I20
I21
301
301
301
I22
301
I23
301
I24
301
I25
301
301
I26
301
301
I27
301
301
Example 2.10: Consider the program for the determination of division of a student as
explained in example 2.2. Identify the equivalence class test cases for output and input
domains.
Solution: Output domain equivalence class test cases can be identified as follows:
O1 =
O2 =
O3 =
O4 =
O5 =
O6 =
The test cases generated by output domain are given in Table 2.24.
Table 2.24. Output domain test cases
Test Case
mark1
mark2
mark3
Expected Output
O1
75
80
85
O2
68
68
68
First division
O3
55
55
55
Second division
O4
45
45
45
Third division
O5
25
25
25
Fail
O6
-1
50
50
Invalid marks
mark3
Functional Testing
69
I3 = { 0 mark1 100 and mark2 < 0 and 0 mark3 100 } (mark1 is valid, mark2 is
invalid and mark3 is valid)
I4 = { 0 mark1 100 and 0 mark2 100 and mark3 < 0 } (mark1 is valid, mark2 is
valid and mark3 is invalid)
I5 = { mark1 > 100 and 0 mark2 100 and 0 mark3 100 } (mark1 is invalid,
mark2 is valid and mark3 is valid)
I6 = ( 0 mark1 100 and mark2 > 100 and 0 mark3 100 } (mark1 is valid, mark2
is invalid and mark3 is valid)
I7 = { 0 mark1 100 and 0 mark2 100 and mark3 > 100 } (mark 1 is valid, mark2
is valid and mark3 is invalid )
I8 = { mark1 < 0 and mark2 < 0 and 0 mark3 100 } (mark1 is invalid, mark2 is
invalid and mark3 is valid)
I9 = { 0 mark1 100 and mark2 < 0 and mark3 < 0 } (mark1 is valid, mark2 is invalid
and mark3 is invalid)
I10 = { mark1 < 0 and 0 mark2 100 and mark3 < 0 } (mark1 is invalid, mark2 is valid
and mark3 is invalid)
I11 = { mark1 > 100 and mark2 > 100 and 0 mark3 100 } (mark1 is invalid, mark2
is invalid and mark3 is valid)
I12 = { 0 mark1 100 and mark2 > 100 and mark3 > 100 } (mark1 is valid, mark2 is
invalid and mark3 is invalid)
I13 = { mark1 > 100 and 0 mark2 100 and mark3 > 100 } (mark1 is invalid, mark2
is valid and mark3 is invalid)
I14 = { mark1 < 0 and mark2 > 100 and 0 mark3 100 } (mark1 is invalid, mark2 is
invalid and mark 3 is valid)
I15 = { mark1 > 100 and mark2 < 0 and 0 mark3 100 }{ (mark1 is invalid, mark2 is
invalid and mark3 is valid)
I16 = { 0 mark1 100 and mark2 < 0 and mark3 > 100 } (mark1 is valid, mark2 is
invalid and mark3 is invalid)
I17 = { 0 mark1 100 and mark2 > 100 and mark3 < 0 } (mark1 is valid, mark2 is
invalid and mark3 is invalid)
I18 = { mark1 < 0 and 0 mark2 100 and mark3 > 100 } (mark1 is invalid, mark2 is
valid and mark3 is invalid)
I19 = { mark1 > 100 and 0 mark2 100 and mark3 < 0 } (mark1 is invalid, mark2 is
valid and mark3 is invalid)
I20 = { mark1 < 0 and mark2 < 0 and mark3 < 0 } (All inputs are invalid)
I21 = { mark1 > 100 and mark2 > 100 and mark3 > 100 } (All inputs are invalid)
I22 = { mark1 < 0 and mark2 < 0 and mark3 > 100 } (All inputs are invalid)
I23 = { mark1 < 0 and mark2 > 100 and mark3 < 0 } (All inputs are invalid)
I24 = { mark1 > 100 and mark2 < 0 and mark3 < 0 } (All inputs are invalid)
I25 = {mark1 > 100 and mark2 > 100 and mark3 < 0 } (All inputs are invalid)
70
Software Testing
I26 = { mark1 > 100 and mark2 < 0 and mark3 > 100 } (All inputs are invalid)
I27 = { mark1 < 0 and mark2 > 100 and mark3 > 100 } (All inputs are invalid)
Thus, 27 test cases are generated on the basis of input domain and are given in Table 3.25.
Table 2.25. Input domain test cases
Test Case
mark1
mark2
mark3
Expected Output
I1
50
50
50
Second division
I2
50
50
Invalid marks
I3
50
50
Invalid marks
I4
50
50
Invalid marks
I5
101
50
50
Invalid marks
I6
50
101
50
Invalid marks
I7
50
50
101
Invalid marks
I8
50
Invalid marks
I9
50
Invalid marks
I10
50
Invalid marks
I11
101
101
50
Invalid marks
I12
50
101
101
Invalid marks
I13
101
50
101
Invalid marks
I14
101
50
Invalid marks
I15
101
50
Invalid marks
I16
50
101
Invalid marks
I17
50
101
Invalid marks
I18
50
101
Invalid marks
I19
101
50
Invalid marks
I20
Invalid marks
I21
101
101
101
Invalid marks
I22
101
Invalid marks
I23
101
Invalid marks
I24
101
Invalid marks
I25
101
101
Invalid marks
I26
101
101
Invalid marks
I27
101
101
Invalid marks
Hence, the total number of equivalence class test cases are 27 (input domain) + 6 (output
domain) which is equal to 33.
Example 2.11: Consider the program for classification of a triangle specified in example 2.3.
Identify the equivalence class test cases for output and input domain.
Solution: Output domain equivalence classes are:
Functional Testing
O1=
O2=
O3=
O4=
O5=
71
Expected Output
O1
O2
O3
O4
O5
50
50
57
50
101
40
49
40
50
50
30
49
40
100
50
72
Software Testing
I14 = { a < 1 and b > 100 and 1 c 100 } ( a is invalid, b is invalid and c
valid)
I15 = { a > 100 and b < 1 and 1 c 100 } ( a is invalid, b is invalid and c
valid)
I16 = {1 a 100 and b < 1 and c > 100 } ( a is valid, b is invalid and c
invalid)
I17 = { 1 a 100 and b > 100 and c < 1 } ( a is valid, b is invalid and c
invalid)
I18 = { a < 1 and 1 b 100 and c > 100 } (a is invalid, b is valid and c
invalid)
I19 = { a > 100 and 1 b 100 and c < 1 } (a is invalid, b is valid and c
invalid)
I20 = { a < 1 and b < 1 and c < 1 } (All inputs are invalid)
I21 = { a > 100 and b > 100 and c > 100 } ( All inputs are invalid)
I22 = { a < 1 and b < 1 and c > 100 } (All inputs are invalid)
I23 = { a < 1 and b > 100 and c < 1 } (All inputs are invalid)
124 = { a > 100 and b < 1 and c < 1 } (All inputs are invalid)
125 = { a > 100 and b > 100 and c < 1 } (All inputs are invalid)
I26 = { a > 100 and b < 1 and c > 100 } (All inputs are invalid)
I27 = { a < 1 and b > 100 and c > 100 } (All inputs are invalid)
Some input domain test cases can be obtained using the relationship amongst a, b and c.
I28 =
I29 =
I30 =
I31 =
I32 =
I33 =
I34 =
I35 =
I36 =
I37 =
I38 =
I39 =
I40 =
I41 =
I42 =
I43 =
{ a2 = b2 + c2 }
{ b2 = c2 + a2 }
{ c2 = a2 + b2 }
{ a2 > b2 + c2 }
{ b2 > c2 + a2 }
{ c2 > a2 + b2 }
{ a2 < b2 + c2 }
{ b2 < c2 + a2 }
{ c2 < a2 + b2 }
{a=b+c}
{a>b+c}
{b=c+a}
{b>c+a}
{c=a+b}
{c>a+b}
{ a2 < b2 + c2 && b2 < c2 + a2 && c2 < a2 + b2 }
is
is
is
is
is
is
Functional Testing
Expected Output
I1
I2
I3
I4
I5
I6
I7
I8
I9
I10
I11
I12
I13
I14
I15
I16
I17
I18
I19
I20
I21
I22
I23
I24
I25
I26
I27
I28
I29
I30
I31
I32
I33
I34
I35
I36
I37
I38
I39
I40
I41
I42
I43
50
0
50
50
101
50
50
0
50
0
101
50
101
0
101
50
50
0
101
0
101
0
0
101
101
101
0
50
40
40
57
40
40
50
49
49
100
100
50
40
50
40
49
50
50
0
50
50
101
50
0
0
50
101
101
50
101
0
0
101
50
50
0
101
0
101
0
101
0
101
40
50
30
40
57
40
49
50
49
50
40
100
100
50
40
49
50
50
50
0
50
50
101
50
0
0
50
101
101
50
50
101
0
101
0
0
101
101
0
0
0
101
101
30
30
50
40
50
57
49
49
50
50
40
50
40
100
100
50
73
74
Software Testing
Hence, total number of equivalence class test cases are 43 (input domain) and 5 (output
domain) which is equal to 48.
Example 2.12: Consider the program for determining the day of the week as explained in
example 2.4. Identify the equivalence class test cases for output and input domains.
Solution: Output domain equivalence classes are:
O1 = { < Day, Month, Year > : Monday for all valid inputs }
O2 = { < Day, Month, Year > : Tuesday for all valid inputs }
O3 = { < Day, Month, Year > : Wednesday for all valid inputs}
O4 = { < Day, Month, Year > : Thursday for all valid inputs}
O5 = { < Day, Month, Year > : Friday for all valid inputs}
O6 = { < Day, Month, Year > : Saturday for all valid inputs}
O7 = { < Day, Month, Year > : Sunday for all valid inputs}
O8 = { < Day, Month, Year > : Invalid Date if any of the input is invalid}
O9 = { < Day, Month, Year > : Input out of range if any of the input is out of range}
The output domain test cases are given in Table 2.28.
Table 2.28. Output domain equivalence class test cases
Test Case
O1
O2
O3
O4
O5
O6
O7
O8
O9
month
6
6
6
6
6
6
6
6
6
day
11
12
13
14
15
16
17
31
32
year
1979
1979
1979
1979
1979
1979
1979
1979
1979
(ii)
Valid partitions
M1: Month has 30 Days
M2 : Month has 31 Days
M3 : Month is February
D1 : Days of a month from 1 to 28
D2 : Day = 29
D3 : Day = 30
D4 : Day = 31
Y1 : 1900 year 2058 and is a common year
Y2 : 1900 year 2058 and is a leap year.
Invalid partitions
M4 : Month < 1
Expected Output
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday
Sunday
Invalid date
Inputs out of range
Functional Testing
M5 : Month > 12
D5 : Day < 1
D6 : Day > 31
Y3 : Year < 1900
Y4 : Year > 2058
We may have following equivalence classes which are based on input domain:
(a)
(b)
75
76
Software Testing
I31 = { M4 and D4 and Y1 } (Month is invalid, Day is valid and Year is valid)
I32 = { M5 and D4 and Y1 } (Month is invalid, Day is valid and year is valid)
I33 = { M4 and D1 and Y2 } (Month is invalid, Day is valid and Year is valid)
I34 = { M5 and D1 and Y2 } (Month is invalid, Day is valid and Year is valid)
I35 = { M4 and D2 and Y2 } (Month is invalid, Day is valid and Year is valid)
I36 = { M5 and D2 and Y2 } (Month is invalid, Day is valid and Year is valid)
I37 = { M4 and D3 and Y2 } (Month is invalid, Day is valid and Year is valid)
I38 = { M5 and D3 and Y2 } (Month is invalid, Day is valid and Year is valid)
I39 = { M4 and D4 and Y2 } (Month is invalid, Day is valid and Year is valid)
I40 = { M5 and D4 and Y2 } (Month is invalid, Day is valid and Year is valid)
I41 = { M1 and D5 and Y1 } (Month is valid, Day is invalid and Year is valid)
I42 = { M1 and D6 and Y1 } (Month is valid, Day is invalid and Year is valid)
I43 = { M2 and D5 and Y1 } (Month is valid, Day is invalid and Year is valid)
I44 = { M2 and D6 and Y1 } (Month is valid, Day is invalid and Year is valid)
I45 = { M3 and D5 and Y1 } (Month is valid, Day is invalid and Year is valid)
I46 = { M3 and D6 and Y1 } (Month is valid, Day is invalid and Year is valid)
I47 = { M1 and D5 and Y2 } (Month is valid, Day is invalid and Year is valid)
I48 = { M1 and D6 and Y2 } (Month is valid, Day is invalid and Year is valid)
I49 = { M2 and D5 and Y2 } (Month is valid, Day is invalid and Year is valid)
I50 = { M2 and D6 and Y2 } (Month is valid, Day is invalid and Year is valid)
I51 = { M3 and D5 and Y2 } (Month is valid, Day is invalid and Year is valid)
I52 = { M3 and D6 and Y2 } (Month is valid, Day is invalid and Year is valid)
I53 = { M1 and D1 and Y3 } (Month is valid, Day is valid and Year is invalid)
I54 = { M1 and D1 and Y4 } (Month is valid, Day is valid and Year is invalid)
I55 = { M2 and D1 and Y3 } (Month is valid, Day is valid and Year is invalid)
I56 = { M2 and D1 and Y4 } (Month is valid, Day is valid and Year is invalid)
I57 = { M3 and D1 and Y3 } (Month is valid, Day is valid and Year is invalid)
I58 = { M3 and D1 and Y4 } (Month is valid, Day is valid and Year is invalid)
I59 = { M1 and D2 and Y3 } (Month is valid, Day is valid and Year is invalid)
I60 = { M1 and D2 and Y4 } (Month is valid, Day is valid and Year is invalid)
I61 = { M2 and D2 and Y3 } (Month is valid, Day is valid and Year is invalid)
I62 = { M2 and D2 and Y4 } (Month is valid, Day is valid and Year is invalid)
I63 = { M3 and D2 and Y3 } (Month is valid, Day is valid and Year is invalid)
I64 = { M3 and D2 and Y4 } (Month is valid, Day is valid and Year is invalid)
I65 = { M1 and D3 and Y3 } (Month is valid, Day is valid and Year is invalid)
I66 = { M1 and D3 and Y3 } (Month is valid, Day is valid and Year is invalid)
I67 = { M2 and D3 and Y3 } (Month is valid, Day is valid and Year is invalid)
Functional Testing
I68 = { M2 and D3 and Y4 } (Month is valid, Day is valid and Year is invalid)
I69 = { M3 and D3 and Y3 } (Month is valid, Day is valid and Year is invalid)
I70 = { M3 and D3 and Y4 } (Month is valid, Day is valid and Year is invalid)
I71 = { M1 and D4 and Y3 } (Month is valid, Day is valid and Year is invalid)
I72 = { M1 and D4 and Y4 } (Month is valid, Day is valid and Year is invalid)
I73 = { M2 and D4 and Y3 } (Month is valid, Day is valid and Year is invalid)
I74 = { M2 and D4 and Y4 } (Month is valid, Day is valid and Year is invalid)
I75 = { M3 and D4 and Y3 } (Month is valid, Day is valid and Year is invalid)
I76 = { M3 and D4 and Y4 } (Month is valid, Day is valid and Year is invalid)
I77 = { M4 and D5 and Y1 } (Month is invalid, Day is invalid and Year is valid)
178 = { M4 and D5 and Y2 } (Month is invalid, Day is invalid and year is valid)
I79 = { M4 and D6 and Y1 } (Month is invalid, Day is invalid and Year is valid)
I80 = { M4 and D6 and Y2 } (Month is invalid, Day is invalid and Year is valid)
I81 = { M5 and D5 and Y1 } (Month is invalid, Day is invalid and Year is valid)
I82 = { M5 and D5 and Y2 } (Month is invalid, Day is invalid and Year is valid)
I83 = { M5 and D6 and Y1 } (Month is invalid, Day is invalid and Year is valid)
I84 = { M5 and D6 and Y2 } (Month is invalid, Day is invalid and Year is valid)
I85 = { M4 and D1 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I86 = { M4 and D1 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I87 = { M4 and D2 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I88 = { M4 and D2 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I89 = { M4 and D3 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I90 = { M4 and D3 and Y4 } (Month is invalid, day is valid and Year is invalid)
I91 = { M4 and D4 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I92 = { M4 and D4 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I93 = { M5 and D1 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I94 = { M5 and D1 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I95 = { M5 and D2 and Y3 } (Month is invalid, Day is valid and year is invalid)
I96 = { M5 and D2 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I97 = { M5 and D3 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I98 = { M5 and D3 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I99 = { M5 and D4 and Y3 } (Month is invalid, Day is valid and Year is invalid)
I100 = { M5 and D4 and Y4 } (Month is invalid, Day is valid and Year is invalid)
I101 = { M1 and D5 and Y3 } (Month is valid, Day is invalid and Year is invalid)
I102 = { M1 and D5 and Y4 } (Month is valid, Day is invalid and Year is invalid)
I103 = { M2 and D5 and Y3 } (Month is valid, Day is invalid and Year is invalid)
I104 = { M2 and D5 and Y4 } (Month is valid, Day is invalid and Year is invalid)
77
78
Software Testing
I105 = { M3 and D5 and Y3 } (Month is valid, Day is invalid and Year is invalid)
I106 = { M3 and D5 and Y4 } (Month is valid, Day is invalid and Year is invalid)
I107 = { M1 and D6 and Y3 } (Month is valid, Day is invalid and Year is invalid)
I108 = { M1 and D6 and Y4 } (Month is valid, Day is invalid and Year is invalid)
I109 = { M2 and D6 and Y3 } (Month is valid, Day is invalid and Year is invalid)
I110 = { M2 and D6 and Y4 } (Month is valid, Day is invalid and Year is invalid)
I111 = { M3 and D6 and Y3 } (Month is valid, Day is invalid and Year is invalid)
I112 = { M3 and D6 and Y4 } (Month is valid, Day is invalid and Year is invalid)
I113 = ( M4 and D5 and Y3 } (All inputs are invalid)
I114 = { M4 and D5 and Y4 } (All inputs are invalid)
I115 = { M4 and D6 and Y3 } (All inputs are invalid)
I116 = { M4 and D6 and Y4 } (All inputs are invalid)
I117 = { M5 and D5 and Y3 } (All inputs are invalid)
I118 = { M5 and D5 and Y4 } (All inputs are invalid)
I119 = { M5 and D6 and Y3 } (All inputs are invalid)
I120 = { M5 and D6 and Y4 } (All inputs are invalid)
The test cases generated on the basis of input domain are given in Table 2.29.
Table 2.29. Input domain equivalence class test cases
Test Case
month
day
year
Expected Output
I1
I2
I3
I4
I5
I6
I7
I8
I9
I10
I11
I12
I13
I14
I15
I16
I17
I18
I19
I20
I21
I22
6
5
2
6
5
2
6
5
2
6
5
2
6
5
2
6
5
2
6
5
2
6
15
15
15
29
29
29
30
30
30
31
31
31
15
15
15
29
29
29
30
30
30
31
1979
1979
1979
1979
1979
1979
1979
1979
1979
1979
1979
1979
2000
2000
2000
2000
2000
2000
2000
2000
2000
2000
Friday
Tuesday
Thursday
Friday
Tuesday
Invalid Date
Saturday
Wednesday
Invalid Date
Invalid Date
Thursday
Invalid Date
Thursday
Monday
Tuesday
Thursday
Monday
Tuesday
Friday
Tuesday
Invalid date
Invalid date
(Contd.)
Functional Testing
79
(Contd.)
Test Case
I23
I24
I25
I26
I27
I28
I29
I30
I31
I32
I33
I34
I35
I36
I37
I38
I39
I40
I41
I42
I43
I44
I45
I46
I47
I48
I49
I50
I51
I52
I53
I54
I55
I56
I57
I58
I59
I60
I61
I62
I63
I64
I65
I66
I67
I68
month
5
2
0
13
0
13
0
13
0
13
0
13
0
13
0
13
0
13
6
6
5
5
2
2
6
6
5
5
2
2
6
6
5
5
2
2
6
6
5
5
2
2
6
6
5
5
day
31
31
15
15
29
29
30
30
31
31
15
15
29
29
30
30
31
31
0
32
0
32
0
32
0
32
0
32
0
32
15
15
15
15
15
15
29
29
29
29
29
29
30
30
30
30
year
2000
2000
1979
1979
1979
1979
1979
1979
1979
1979
2000
2000
2000
2000
2000
2000
2000
2000
1979
1979
1979
1979
1979
1979
2000
2000
2000
2000
2000
2000
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
Expected Output
Wednesday
Invalid date
Input(s) out of range
Input(s) out of range
Inputs(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
(Contd.)
80
Software Testing
(Contd.)
Test Case
I69
I70
I71
I72
I73
I74
I75
I76
I77
I78
I79
I80
I81
I82
I83
I84
I85
I86
I87
I88
I89
I90
I91
I92
I93
I94
I95
I96
I97
I98
I99
I100
I101
I102
I103
I104
I105
I106
I107
I108
I109
I110
I111
I112
I113
I114
month
2
2
6
6
5
5
2
2
0
0
0
0
13
13
13
13
0
0
0
0
0
0
0
0
13
13
13
13
13
13
13
13
5
5
6
6
2
2
5
5
6
6
2
2
0
0
day
30
30
31
31
31
31
31
31
0
0
32
32
0
0
32
32
15
15
20
29
30
30
31
31
15
15
29
29
30
30
31
31
0
0
0
0
0
0
32
32
32
32
32
32
0
0
year
1899
2059
1899
2059
1899
2059
1899
2059
1979
2000
1979
2000
1979
2000
1979
2000
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
1899
2059
Expected Output
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
(Contd.)
Functional Testing
81
(Contd.)
Test Case
I115
I116
I117
I118
I119
I120
month
0
0
13
13
13
13
day
32
32
0
0
32
32
year
1899
2059
1899
2059
1899
2059
Expected Output
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Input(s) out of range
Hence, the total number of equivalence class test cases are 120 (input domain) + 9 (output
domain) which is equal to 129. However, most of the outputs are Input out of range and may
not offer any value addition. This situation occurs when we choose more numbers of invalid
equivalence classes.
It is clear that if the number of valid partitions of input domain increases, then the number
of test cases increases very significantly and is equal to the product of the number of partitions
of each input variable. In this example, there are 5 partitions of input variable month, 6
partitions of input variable day and 4 partitions of input variable year and thus leading to
5x6x4 = 120 equivalence classes of input domain.
c1
c2
c3
Action
a1
a2
a3
a4
Entries
Four Portions
1.
2.
3.
4.
Condition Stubs
Condition Entries
Action Stubs
Action Entries
82
Software Testing
Action Entries: Each entry in the action entries portion has some associated action or set of
actions in this lower right portion of the table. These values are known as outputs and are
dependent upon the functionality of the program.
R1
F
X
R2
T
F
X
R3
T
T
F
R4
T
T
T
X
X
X
In Table 2.31, input values are only True (T) or False (F), which are binary conditions. The
decision tables which use only binary conditions are known as limited entry decision tables.
The decision tables which use multiple conditions where a condition may have many
possibilities instead of only true and false are known as extended entry decision tables
[COPE04].
Condition
Action
c1: a < b + c?
c2: b < c + a?
c3: c < a + b?
c4: a2 = b2 + c2?
c5: a2 > b2 + c2?
c6: a2 < b2 + c2?
Rule Count
a1 : Invalid triangle
a2 : Right angled triangle
a3 : Obtuse angled triangle
a4 : Acute angled triangle
a5 : Impossible
F
32
X
T
F
16
X
T
T
F
8
X
T
T
T
T
T
T
1
T
T
T
T
T
F
1
T
T
T
T
F
T
1
T
T
T
T
F
F
1
T
T
T
F
T
T
1
T
T
T
F
T
F
1
T
T
T
F
F
T
1
T
T
T
F
F
F
1
X
X
X
X
Functional Testing
83
The do not care conditions are represented by the -sign. A do not care condition has no
effect on the output. If we refer to column 1 of the decision table, where condition c1: a < b +
c is false, then other entries become do not care entries. If c1 is false, the output will be
Invalid triangle irrespective of any state (true or false) of other conditions like c2, c3, c4, c5
and c6. These conditions become do not care conditions and are represented by -sign. If we
do not do so and represent all true and false entries of every condition, the number of columns
in the decision table will unnecessarily increase. This is nothing but a representation facility in
the decision table to reduce the number of columns and avoid redundancy. Ideally, each
column has one rule and that leads to a test case. A column in the entry portion of the table is
known as a rule. In the Table 2.32, a term is used as rule count and 32 is mentioned in column
1. The term rule count is used with do not care entries in the decision table and has a value
1, if do not care conditions are not there, but it doubles for every do not care entry. Hence
each do not care condition counts for two rules. Rule count can be calculated as:
Rule count = 2 number of do not care conditions
However, this is applicable only for limited entry decision tables where only true and
false conditions are considered. Hence, the actual number of columns in any decision table
is the sum of the rule counts of every column shown in the decision table. The triangle
classification decision table has 11 columns as shown in Table 2.32. However the actual
columns are a sum of rule counts and are equal to 64. Hence, this way of representation has
reduced the number of columns from 64 to 11 without compromising any information. If rule
count value of the decision table does not equal to the number of rules computed by the
program, then the decision table is incomplete and needs revision.
2.3.5 Applicability
Decision tables are popular in circumstances where an output is dependent on many conditions
and a large number of decisions are required to be taken. They may also incorporate complex
business rules and use them to design test cases. Every column of the decision table generates
a test case. As the size of the program increases, handling of decision tables becomes difficult
and cumbersome. In practice, they can be applied easily at unit level only. System testing and
integration testing may not find its effective applications.
Example 2.13: Consider the problem for determining of the largest amongst three numbers as
given in example 2.1. Identify the test cases using the decision table based testing.
Solution: The decision table is given in Table 2.33.
84
Software Testing
F
-
T
F
-
T
T
F
-
T
T
T
F
-
T
T
T
T
F
-
T
T
T
T
T
F
-
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
F
T
T
T
T
T
T
T
F
T
T
T
T
T
T
T
T
F
F
T
T
T
T
T
T
F
T
T
T
T
T
T
T
T
F
T
F
T
T
T
T
T
T
F
F
T
T
T
T
T
T
T
F
F
F
Rule Count
a1 : Invalid input
a2 : x is largest
a3 : y is largest
a4 : z is largest
a5 : Impossible
256
X
128
X
64
X
32
X
16
X
8
X
X
X
Table 2.34.
Test Case
Expected Output
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
0
301
50
50
50
50
?
150
150
150
110
140
120
?
50
50
0
301
50
50
?
130
130
130
150
150
140
?
50
50
50
50
0
301
?
110
170
140
140
120
150
?
Invalid marks
Invalid marks
Invalid marks
Invalid marks
Invalid marks
Invalid marks
Impossible
150
170
150
150
150
150
Impossible
Example 2.14: Consider the problem for determining the division of the student in example
2.2. Identify the test cases using the decision table based testing.
Solution: This problem can be solved using either limited entry decision table or extended
entry decision table. The effectiveness of any solution is dependent upon the creation of
various conditions. The limited entry decision table is given in Table 2.35 and its associated
test cases are given in Table 2.36. The impossible inputs are shown by ? as given in test cases
8, 9, 10, 12, 13, 14, 16, 17, 19 and 22. There are 11 impossible test cases out of 22 test cases
which is a very large number and compel us to look for other solutions.
c5 : mark3 > = 0 ?
c7 : 0
a1 : Invalid marks
a7 : Impossible
a6 : Fail
a5 : Third division
a4 : Second division
a3 : First division
1024
74 ?
Rule Count
avg
c10 : 60
59 ?
avg
c9 : 50
49 ?
c11
avg
c8 : 40
39 ?
c3 : mark2 > = 0 ?
avg
512
c1 : mark1 > = 0 ?
Conditions
Table 2.35.
256
128
64
32
10
11
12
13
14
15
16
17
18
19
20
21
22
Functional Testing
85
86
Software Testing
There are 22 test cases corresponding to each column in the decision table. The test cases are
given in Table 2.36.
Table 2.36.
Test Case
mark1
mark2
mark3
Expected Output
1.
50
50
Invalid marks
2.
101
50
50
Invalid marks
3.
50
50
Invalid marks
4.
50
101
50
Invalid marks
5.
50
50
Invalid marks
6.
50
50
101
Invalid marks
7.
Impossible
8.
Impossible
9.
Impossible
10.
Impossible
11.
25
25
25
Fail
12.
Impossible
13.
Impossible
14.
Impossible
15.
45
45
45
Third division
16.
Impossible
17.
Impossible
18.
55
55
55
Second division
19.
Impossible
20.
65
65
65
First division
21.
80
80
80
22.
Impossible
The input domain may be partitioned into the following equivalence classes:
I1 = { A1 : 0
mark1
100 }
I2 = { A2 : mark1 < 0 }
I3 = { A3 : mark1 > 100 }
I4 = { B1 : 0
mark2
100 }
Functional Testing
87
mark3
100 }
I8 = { C2 : mark3 < 0 }
I9 = { C3 : mark3 > 100 }
I10 = { D1 : 0
avg
39 }
I11 = { D2 : 40
avg
49 }
I12 = { D3 : 50
avg
59 }
I13 = { D4 : 60
avg
74}
I14 = { D5 : avg
75 }
10
11
c1 : mark1 in
A1
A1
A1
A1
A1
A1
A1
A1
A1
A2
A3
c2 : mark 2 in
B1
B1
B1
B1
B1
B1
B1
B2
B3
c3 : mark3 in
C1
C1
C1
C1
C1
C2
C3
c4 : avg in
D1
D2
D3
D4
D5
Rule Count
15
15
45
45
X
X
X
X
Here 2numbers of do not care conditions formula cannot be applied because this is an extended entry
decision table where multiple conditions are used. We have made equivalence classes for
mark1, mark2, mark3 and average value. In column 6, rule count is 5 because average value
is do not care otherwise the following combinations should have been shown:
A1, B1, C2, D1
A1, B1, C2, D2
A1, B1, C2, D3
88
Software Testing
mark1
mark2
mark3
Expected Output
1.
25
25
25
Fail
2.
45
45
45
Third Division
3.
55
55
55
Second Division
4.
65
65
65
First Division
5.
80
80
80
6.
50
50
Invalid marks
7.
50
50
101
Invalid marks
8.
50
50
Invalid marks
9.
50
101
50
Invalid marks
10.
50
50
Invalid marks
11.
101
50
50
Invalid marks
Example 2.15: Consider the program for classification of a triangle in example 2.3. Design
the test cases using decision table based testing.
Solution: We may also choose conditions which include an invalid range of input domain, but
this will increase the size of the decision table as shown in Table 2.39. We add an action to
show that the inputs are out of range.
The decision table is given in Table 2.39 and has the corresponding test cases that are given
in Table 2.40. The number of test cases is equal to the number of columns in the decision table.
Hence, 17 test cases can be generated.
In the decision table given in Table 2.39, we assumed that a is the longest side. This time
we do not make this assumption and take all the possible conditions into consideration i.e. any
of the sides a, b or c can be longest. It has 31 rules as compared to the 17 given in Table
2.40. The full decision table is given in Table 2.41. The corresponding 55 test cases are given
in Table 2.42.
F
1048
X
c1: a<b+c?
c2: b<c+a?
c3: c<a+b?
c4: a > 0?
c6: b > 0?
c8: c > 0?
c10: a2 = b2+c2?
Rule Count
a1 : Invalid Triangle
a6 : Impossible
Conditions
Table 2.39.
1024
512
256
128
64
32
16
10
11
12
13
14
15
16
17
Functional Testing
89
90
Software Testing
a
90
40
40
0
101
50
50
50
50
?
?
?
50
?
57
50
?
b
40
90
40
50
50
0
101
50
50
?
?
?
40
?
40
49
?
c
40
40
90
50
50
50
50
0
101
?
?
?
30
?
40
49
?
Expected Output
Invalid Triangle
Invalid Triangle
Invalid Triangle
Input(s) out of Range
Input(s) out of Range
Input(s) out of Range
Input(s) out of Range
Input(s) out of Range
Input(s) out of Range
Impossible
Impossible
Impossible
Right Angled Triangle
Impossible
Obtuse Angled Triangle
Acute Angled Triangle
Impossible
Table 2.41.
Conditions
c1: a < b+c?
c2: b < c+a?
c3: c < a+b?
c4: a > 0?
c5: a <= 100?
c6: b > 0?
c7: b <= 100?
c8: c > 0?
c9: c <= 100?
c10: a2 = b2+c2?
c11: b2 = c2+a2?
c12: c2 = a2+b2?
c13: a2 > b2+c2?
c14: b2 > c2+a2?
c15: c2 > a2+b2?
Rule Count
a1 : Invalid triangle
a2 : Input(s) out of
range
a3 : Right angled
triangle
a4 : Obtuse angled
triangle
a5 : Acute angled
triangle
a6 : Impossible
1
2
3
4
F
T
T
T
F
T
T
F
T
F
16384 8192 4096 2048
X
X
X
X
5
T
T
T
T
F
1024
6
T
T
T
T
T
F
512
7
T
T
T
T
T
T
F
256
8
9
10
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
T
F
T
T
F
T
T
T
128 64 16
11
T
T
T
T
T
T
T
T
T
T
F
T
8
X
(Contd.)
Functional Testing
91
(Contd.)
Conditions
12
13
14
15
16
17
18
19
20
21
22
23
24
T
T
T
T
T
T
T
T
T
T
F
F
T
-
T
T
T
T
T
T
T
T
T
T
F
F
F
T
-
T
T
T
T
T
T
T
T
T
T
F
F
F
F
T
T
T
T
T
T
T
T
T
T
T
F
F
F
F
F
T
T
T
T
T
T
T
T
T
F
T
T
-
T
T
T
T
T
T
T
T
T
F
T
F
T
-
T
T
T
T
T
T
T
T
T
F
T
F
F
T
-
T
T
T
T
T
T
T
T
T
F
T
F
F
F
T
T
T
T
T
T
T
T
T
T
F
T
F
F
T
F
T
T
T
T
T
T
T
T
T
F
F
T
T
-
T
T
T
T
T
T
T
T
T
F
F
T
F
T
-
T
T
T
T
T
T
T
T
T
F
F
T
F
F
T
T
T
T
T
T
T
T
T
T
F
F
T
F
F
F
Rule Count
a1 : Invalid triangle
a2 : Input(s) out of
range
a3 : Right angled
triangle
a4 : Obtuse angled
triangle
a5 : Acute angled
triangle
a6 : Impossible
Conditions
25
26
27
28
29
30
31
c4: a > 0?
c6: b > 0?
c8: c > 0?
c11: b = c +a ?
c12: c2 = a2+b2?
c10: a = b +c ?
(Contd.)
92
Software Testing
(Contd.)
Conditions
25
26
27
28
29
30
31
c15: c > a +b ?
Rule Count
c13: a > b +c ?
c14: b > c +a ?
a1 : Invalid triangle
a2 : Input(s) out of range
a3 : Right angled triangle
a4 : Obtuse angled triangle
X
X
Expected Output
1.
90
40
40
Invalid Triangle
2.
40
90
40
Invalid Triangle
3.
40
40
90
Invalid Triangle
4.
50
50
5.
101
50
50
6.
50
50
7.
50
101
50
8.
50
50
9.
50
50
101
10.
Impossible
11.
Impossible
12.
Impossible
13.
Impossible
14.
Impossible
15.
50
40
30
16.
Impossible
17.
Impossible
18.
Impossible
19.
Impossible
20.
40
50
30
Functional Testing
93
(Contd.)
Test Case
Expected Output
21.
Impossible
22.
Impossible
23.
Impossible
24.
40
30
50
25.
Impossible
26.
Impossible
27.
57
40
40
28.
Impossible
29.
40
57
40
30.
40
40
57
31.
50
49
49
Example 2.16: Consider a program for the determination of day of the week specified in
example 2.4. Identify the test cases using decision table based testing.
Solution: The input domain can be divided into the following classes:
I1 = { M1 : month has 30 days }
I2 = { M2 : month has 31 days }
I3 = { M3 : month is February }
I4 = { M4 : month <1 }
I5 = { M5 : month > 12 }
I6 = { D1 : 1
Day
28 }
I7 = { D2 : Day = 29 }
I8 = { D3 : Day = 30 }
I9 = { D4 : Day = 31 }
I10 = { D5 : Day < 1 }
I11 = { D6 : Day > 31 }
I12 = { Y1 : 1900
Year
I13 = { Y2 : 1900
Year
D1
Y1
c1 : Months in
c2 : Days in
c3 : Years in
Rule Count
38
M3
D1
Y2
1
39
M3
D1
Y3
1
Test Case
c1 : Months in
c2 : Days in
c3 : Years in
Rule Count
a1: Invalid Date
a2 : Day of the week
a3 : Input out of range
22
M2
D1
Y4
1
21
M2
D1
Y3
1
Test Case
c1 : Months in
c2 : Days in
c3 : Years in
Rule Count
a1 : Invalid Date
a2 : Day of the week
a3 : Input out of range
Y3
D1
M1
Y2
D1
M1
a1 : Invalid Date
a2 : Day of the
week
a3 : Input out of
range
M1
Test Case
Table 2.43.
41
M3
D2
Y1
1
X
Y1
D2
24
M2
D2
Y2
1
40
M3
D1
Y4
1
5
M1
23
M2
D2
Y1
1
Y4
D1
M1
25
M2
D2
Y3
1
42
M3
D2
Y2
1
Y2
D2
M1
43
M3
D2
Y3
1
Y3
Y4
D2
M1
44
M3
D2
Y4
1
26
M2
D2
Y4
1
D2
M1
45
M3
D3
Y1
1
X
27
M2
D3
Y1
1
Y1
D3
M1
46
M3
D3
Y2
1
X
28
M2
D3
Y2
1
Y2
D3
M1
10
47
M3
D3
Y3
1
29
M2
D3
Y3
1
Y3
D3
M1
11
48
M3
D3
Y4
1
30
M2
D3
Y4
1
Y4
D3
M1
12
49
M3
D4
Y1
1
X
31
M2
D4
Y1
1
Y1
D4
M1
13
32
M2
D4
Y2
1
50
M3
D4
Y2
1
X
Y2
D4
M1
14
The decision table is given in Table 2.43 and the corresponding test cases are given in Table 2.44.
15
33
M2
D4
Y3
1
51
M3
D4
Y3
1
Y3
D4
M1
16
52
M3
D4
Y4
1
Y4
17
D5
M1
53
M3
D5
1
34
M2
D4
Y4
1
D4
M1
18
54
M3
D6
1
35
M2
D5
4
D6
M1
19
55
M4
24
36
M2
D6
4
Y1
D1
M2
20
56
M5
24
37
M3
D1
Y1
1
Y2
D1
M2
94
Software Testing
Functional Testing
95
month
day
year
Expected Output
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
43.
44.
45.
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
6
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
5
2
2
2
2
2
2
2
2
2
15
15
15
15
29
29
29
29
30
30
30
30
31
31
31
31
0
32
15
15
15
15
29
29
29
29
30
30
30
30
31
31
31
31
0
32
15
15
15
15
29
29
29
29
30
1979
2000
1899
2059
1979
2000
1899
2059
1979
2000
1899
2059
1979
2000
1899
2059
1979
1979
1979
2000
1899
2059
1979
2000
1899
2059
1979
2000
1899
2059
1979
2000
1899
2059
1979
1979
1979
2000
1899
2059
1979
2000
1899
2059
1979
Friday
Thursday
Input out of range
Input out of range
Friday
Thursday
Input out of range
Input out of range
Saturday
Friday
Input out of range
Input out of range
Invalid date
Invalid date
Input out of range
Input out of range
Input out of range
Input out of range
Tuesday
Monday
Input out of range
Input out of range
Tuesday
Monday
Input out of range
Input out of range
Wednesday
Tuesday
Input out of range
Input out of range
Thursday
Wednesday
Input out of range
Input out of range
Input out of range
Input out of range
Thursday
Tuesday
Input out of range
Input out of range
Invalid date
Tuesday
Input out of range
Input out of range
Invalid date
(Contd.)
96
Software Testing
(Contd.)
Test Case
month
day
year
Expected Output
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
2
2
2
2
2
2
2
2
2
0
13
30
30
30
31
31
31
31
0
32
0
32
2000
1899
2059
1979
2000
1899
2059
1979
1979
1899
1899
Invalid date
Input out of range
Input out of range
Invalid date
Invalid date
Input out of range
Input out of range
Input out of range
Input out of range
Input out of range
Input out of range
The product of number of partitions of each input variable (or equivalence classes) is 120.
The decision table has 56 columns and 56 corresponding test cases are shown in Table 2.44.
Functional Testing
97
In Figure 2.12, each node represents either true (present) or false (absent) state and may be
assigned 1 and 0 value respectively. The purpose of four functions is given as:
(a) Identity: This function states that if c1 is 1, then e1 is 1; else e1 is 0.
(b) NOT: This function states that if c1 is 1, then e1 is 0; else e1 is 1.
(c) AND: This function states that if both c1 and c2 are 1, then e1 is 1; else e1 is 0.
(d) OR: This function states that if either c1 or c2 is 1, then e1 is 1; else e1 is 0.
The AND and OR functions are allowed to have any number of inputs.
Exclusive
The Exclusive (E) constraint states that at most one of c1 or c2 can be 1 (c1 or c2 cannot
be 1 simultaneously). However, both c1 and c2 can be 0 simultaneously.
Inclusive
The Inclusive (I) constraints states that at least one of c1 or c2 must always be 1. Hence,
both cannot be 0 simultaneously. However, both can be 1.
One and Only One
The one and only one (O) constraint states that one and only one of c1 and c2 must be 1.
98
Software Testing
(d)
(e)
Requires
The requires (R) constraint states that for c1 to be 1, c2 must be 1; it is impossible for
c1 to be 1 if c2 is 0.
Mask
This constraint is applicable at the effect side of the cause-effect graph. This states that
if effect e1 is 1, effect e2 is forced to be 0.
These five constraint symbols can be applied to a cause-effect graph depending upon the
relationships amongst causes (a, b, c and d) and effects (e). They help us to represent real life
situations in the cause-effect graph.
Consider the example of keeping the record of marital status and number of children of a
citizen. The value of marital status must be U or M. The value of the number of children
must be digit or null in case a citizen is unmarried. If the information entered by the user is
correct then an update is made. If the value of marital status of the citizen is incorrect, then the
error message 1 is issued. Similarly, if the value of number of children is incorrect, then the
error message 2 is issued.
The causes are:
c1: marital status is U
c2: marital status is M
c3: number of children is a digit
and the effects are:
e1: updation made
e2: error message 1 is issued
e3: error message 2 is issued
Functional Testing
99
The cause-effect graph is shown in Figure 2.14. There are two constraints exclusive
(between c1 and c2) and requires (between c3 and c2), which are placed at appropriate places in
the graph. Causes c1 and c2 cannot occur simultaneously and for cause c3 to be true, cause c2
has to be true. However, there is no mask constraint in this graph.
Figure 2.14. Example of cause-effect graph with exclusive (constraint) and requires constraint
2.4.6 Applicability
Cause-effect graphing is a systematic method for generating test cases. It considers dependency
of inputs using some constraints.
This technique is effective only for small programs because, as the size of the program
increases, the number of causes and effects also increases and thus complexity of the causeeffect graph increases. For large-sized programs, a tool may help us to design the cause-effect
graph with the minimum possible complexity.
It has very limited applications in unit testing and hardly any application in integration
testing and system testing.
Example 2.17: A tourist of age greater than 21 years and having a clean driving record is
supplied a rental car. A premium amount is also charged if the tourist is on business, otherwise
it is not charged.
If the tourist is less than 21 year old, or does not have a clean driving record, the system will
display the following message:
Car cannot be supplied
Draw the cause-effect graph and generate test cases.
100
Software Testing
1
F
-
2
T
F
-
3
T
T
F
4
T
T
T
X
X
X
Table 2.46.
Test Case
Age
Driving_record_clean
On_business
Expected Output
1.
2.
20
26
Yes
No
Yes
Yes
3.
62
Yes
No
4.
62
Yes
Yes
Example 2.18: Consider the triangle classification problem (a is the largest side) specified
in example 2.3. Draw the cause-effect graph and design decision table from it.
Functional Testing
101
Solution:
The causes are:
c1 :
c2 :
c3 :
c4 :
c5 :
c6 :
e1 :
e2 :
e3 :
e4 :
e5 :
The cause-effect graph is shown in Figure 2.16 and the decision table is shown in Table 2.47.
Table 2.47.
Conditions
c1 : a<b+c
c2 : b<a+c
c3 : c<a+b
c4 : a2=b2+c2
c5 : a2>b2+c2
c6 : a2<b2+c2
e1 : Invalid Triangle
e2 : Right angled Triangle
e3 : Obtuse angled triangle
e4 : Acute angled triangle
e5 : Impossible
0
X
X
X
X
X
1
1
0
X
X
X
X
1
1
1
0
X
X
X
1
1
1
1
1
1
1
1
1
1
1
1
0
1
1
1
1
0
1
1
1
1
1
0
0
1
1
1
0
1
1
1
1
1
0
1
0
1
1
1
0
0
1
1
1
1
0
0
0
1
1
1
1
102
Software Testing
2.2
2.3
2.4
2.5
2.6
2.7
2.8
Functional Testing
103
104
Software Testing
Functional Testing
105
2.24 The decision table which uses only binary conditions is known as:
(a) Limited entry decision table
(b) Extended entry decision table
(c) Advance decision table
(d) None of the above
2.25 In cause-effect graphing technique, causes and effects are related to:
(a) Outputs and inputs
(b) Inputs and outputs
(c) Sources and destinations
(d) Destinations and sources
2.26 Cause-effect graphing is one form of:
(a) Structural testing
(b) Maintenance testing
(c) Regression testing
(d) Functional testing
2.27 Which is not a constraint applicable at the causes side in the cause-effect graphing
technique?
(a) Exclusive
(b) Inclusive
(c) Masks
(d) Requires
2.28 Which is not a basic notation used in a cause-effect graph?
(a) NOT
(b) OR
(c) AND
(d) NAND
2.29 Which is the term used for functional testing?
(a) Black box testing
(b) Behavioural testing
(c) Functionality testing
(d) All of the above
2.30 Functional testing does not involve:
(a) Source code analysis
(b) Black box testing techniques
(c) Boundary value analysis
(d) Robustness testing
EXERCISES
2.1 What is functional testing? How do we perform it in limited time and with limited
resources?
2.2 What are the various types of functional testing techniques? Discuss any one with the
help of an example.
2.3 Explain the boundary value analysis technique with a suitable example.
106
Software Testing
2.4 Why do we undertake robustness testing? What are the additional benefits? Show
additional test cases with the help of an example and justify the significance of these
test cases.
2.5 What is worst-case testing? How is it different from boundary value analysis? List the
advantages of using this technique.
2.6 Explain the usefulness of robust worst-case testing. Should we really opt for this
technique and select a large number of test cases? Discuss its applicability and
limitations.
2.7 Consider a program that determines the previous date. Its inputs are a triple of day,
month and year with its values in the range:
1 month 12
1 day 31
1850 year 2050
The possible outputs are previous date or invalid input. Design boundary value
analysis test cases, robust test cases, worst-case test cases and robust worst-case test
cases.
2.8 Consider the program to find the median of three numbers. Its input is a triple of
positive integers (say x, y and z) and values are from interval [100,500]. Generate
boundary, robust and worst-case test cases.
2.9 Consider a program that takes three numbers as input and print the values of these
numbers in descending order. Its input is a triple of positive integers (say x, y and z)
and values are from interval [300,700]. Generate boundary value, robust and worst
case test cases.
2.10 Consider a three-input program to handle personal loans of a customer. Its input is a
triple of positive integers (say principal, rate and term).
1000 principal 40000
1 rate 18
1 term 6
The program should calculate the interest for the whole term of the loan and the total
amount of the personal loan. The output is:
interest = principal * (rate/100) * term
total_amount = principal + interest
Generate boundary value, robust and worst-case test cases
2.11 The BSE Electrical company charges its domestic consumers using the following
slab:
Consumption units
Energy charges
0150
151300
301400
>400
Identify the equivalence class test cases for output and input domain.
2.12 An telephone company charges its customer using the following calling rates:
Functional Testing
Call
Rates
0-75
76-200
201-500
>500
Rs. 500
Rs. 500 + Rs. 0.80 per call in excess of 75 calls
Rs. 500 + Rs. 1.00 per call in excess of 200 calls
Rs. 500 + Rs 1.20 per unit in excess of 500 calls
107
Identify the equivalence class test cases for the output and input domain.
2.13 Consider an example of grading a student in a university. The grading is done as given
below:
Average marks
Grade
90 100
75 89
60 74
50 59
0 49
Exemplary Performance
Distinction
First Division
Second Division
Fail
The marks of any three subjects are considered for the calculation of average marks.
Generate boundary value analysis test cases and robust test cases. Also create
equivalence classes and generate test cases.
2.14 Consider a program for the classification of a triangle. Its input is a triple of positive
integers (say, a, b and c) from the interval [1, 100]. The output may be one of the
following words [scalene, Isosceles, Equilateral, Not a triangle]. Design the boundary
value test cases and robust worst-case test cases. Create equivalence classes and design
test cases accordingly.
2.15 Consider a program that determines the next date. Given a month, day and year as
input, it determines the date of the next day. The month, day and year have integer
values subject to the following conditions:
C1: 1 month 12
C2: 1 day 31
C3: 1800 year 2025
We are allowed to add new conditions as per our requirements.
(a) Generate boundary value analysis test cases, robust test cases, worst-case test
cases and robust worst-case test cases
(b) Create equivalence classes and generate test cases
(c) Develop a decision table and generate test cases
2.16 Consider a program for the determination of the nature of roots of a quadratic equation.
Its input is a triple of positive integers (say a, b and c) and values may be from interval
[0, 100]. The output may have one of the following words:
[Not a quadratic equation, Real roots, Imaginary roots, Equal roots]
(a) Design boundary value analysis test cases, robust test cases, worst-case test cases
and robust worst-case test cases
(b) Create equivalence classes and generate test cases
(c) Develop a decision table and generate test cases
2.17 Explain the equivalence class testing technique. How is it different from boundary
value analysis technique?
108
Software Testing
2.18 Discuss the significance of decision tables in testing. What is the purpose of a rule
count? Explain the concept with the help of an example.
2.19 What is the cause-effect graphing technique? What are basic notations used in a causeeffect graph? Why and how are constraints used in such a graph?
2.20 Consider a program to multiply and divide two numbers. The inputs may be two valid
integers (say a and b) in the range of [0, 100].
(a) Generate boundary value analysis test cases and robust test cases
(b) Create equivalence class and generate test cases
(c) Develop a decision table and generate test cases
(d) Design a cause-effect graph and write test cases accordingly
2.21 Consider the following points based on faculty appraisal and development system of a
university:
Points Earned
University view
16
68
8 10
10 12
12 15
FURTHER READING
A resource for pre-1981 literature which contains a huge bibliography up to 1981:
E.F. Miller and W.E. Howden, Tutorial: Software Testing and Validation
Techniques, IEEE Computer Society, New York, 1981.
A hands-on guide to the black-box testing techniques:
B. Beizer, Black-Box Testing: Techniques for Functional Testing of Software
and Systems, John Wiley and Sons, 1995.
An introductory book on software testing with a special focus on functional testing is:
P. C. Jorgensen, Software Testing: A Craftsman Approach, 3rd ed., Auerbach
Publications, New York, 2007.
Other useful texts are:
W.R. Elmendorf, Cause Effect Graphs in Functional Testing, TR-00.2487,
IBM System Development Division, Poughkeepsie, New York, 1973.
Functional Testing
109
3
Essentials of Graph Theory
Graph theory has been used extensively in computer science, electrical engineering, communication
systems, operational research, economics, physics and many other areas. Any physical situation
involving discrete objects may be represented by a graph along with their relationships amongst
them. In practice, there are numerous applications of graphs in modern science and technology.
Graph theory has recently been used for representing the connectivity of the World Wide Web.
Global internet connectivity issues are studied using graphs like the number of links required to
move from one web page to another and the links which are used to establish this connectivity.
It has also provided many ways to test a program. Some testing techniques are available which
are based on the concepts of graph theory.
111
In the Figure 3.1(a), the node and edge sets are given as:
V = (n1, n2, n3, n4, n5, n6)
E = (e1, e2, e3, e4) = ((n1, n2), (n1, n3), (n2, n4), (n2, n5))
Similarly in Figure 3.1(b), the node and edge sets are
V= (n1, n2, n3, n4, n5, n6)
E = (e1, e2, e3, e4) = (<n1, n2>, <n1, n3>, <n2, n4>, <n2, n5>)
The only difference is that edges are the ordered pairs of nodes represented by <n1, n2>
rather than unordered pairs (n1, n2). For any graph (directed or undirected), a set of nodes and
a set of edges between pairs of nodes are required for the construction of the graph.
An edge of a graph having the same node at its end points is called a loop. The direction of
the edge is not important because the initial and final nodes are one and the same. In Figure
3.2(a), edge e1 is a loop with node n1 and may be represented as e1 = (n1, n1).
112
Software Testing
If certain pairs of nodes are connected by more than one edge in a directed or undirected
graph, then these edges are known as parallel edges. In Figure 3.2(b), e1 and e2 are parallel
edges connecting nodes n1 and n2. Similarly e5 and e6 are also parallel edges connecting nodes
n2 and n5. If a graph is a directed graph and parallel edges are in opposite directions, such edges
are considered as distinct edges. A graph that has neither loops nor parallel edges is called a
simple graph. A graph with one or more parallel edges is called a multigraph. These graphs are
shown in Figure 3.3.
A directed multigraph (see Figure 3.3 (c)) may have distinct parallel edges (like e1 and e2)
and parallel edges (like e5 and e6). If numbers or weights are assigned to each edge of a graph,
then such a graph is called a weighted graph.
outdeg(n1) = 2
indeg(n2) = 1
outdeg(n2) = 2
indeg(n3) = 1
outdeg(n3) = 0
indeg(n4) = 1
outdeg(n4) = 0
indeg(n5) = 1
outdeg(n5) = 0
indeg(n6) = 0
outdeg(n6) = 0
113
The degree of a node in a directed graph is the sum of indegree and outdegree of that node.
Hence, for node n in a directed graph, the degree is given as:
deg(n) = indeg(n) + outdeg(n)
Few important characteristics of nodes are identified on the basis of indegree and outdegree
and are given as:
(i)
(ii)
(iii)
(iv)
(v)
114
Software Testing
converted into matrix form. Some useful matrix representations are given in the following subsections which are used commonly in testing.
a(i, j)
0 otherwise
The incidence matrix of a graph shown in Figure 3.1(a) is given as:
n1
n2
n3
n4
n5
n6
e1 e2 e3 e4
1 1 0 0
1 0 1 1
0 1 0 0
0
0
0
0
1
0
0
1
The sum of entries of any column of incidence matrix is always 2. This is because a column
represents an edge which has only two endpoints. If, any time, the sum is not two, then there
is some mistake in the transfer of information. If we add entries of a row, which is corresponding
to a node, we get the degree of that node. If all entries of a row are 0s, then the corresponding
node is an isolated node. Incidence matrix of a graph may be used to calculate various
properties of a graph which can further be programmed, if so desired. The incidence matrix of
a directed graph is the same as the incidence matrix of an undirected graph.
115
n1 n 2 n3 n 4 n5 n6
n1
n2
n3
n4
n5
n6
0
1
1
0
0
0
1
0
0
0
0
0
1
0
0
1
1
0
0
1
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
If we add entries of any row, we get the degree of the node corresponding to that row. This
is similar to incidence matrix. The adjacency matrix of an undirected graph is symmetric where
a(i, j) = a(j, i).
If we represent connections with the name of edges, then the matrix is called graph matrix.
The size is the same as adjacent matrix i.e. number of nodes in a graph. The graph matrix of a
graph shown in Figure 3.1(a) is given as:
n1
n1
n2
n3
n4
n5
n6
0
e1
e2
0
0
0
n2
n3
n 4 n5 n6
e1
0
0
e3
e4
0
e2
0
0
0
0
0
0 0
e3 e 4
0 0
0 0
0 0
0 0
0
0
0
0
0
0
a(i, j)
0 otherwise
The adjacency matrix of a directed graph may not be symmetric. If we add entries of a row,
it becomes outdegree of the node and if we add entries of a column, it becomes indegree of the
node. The adjacency matrix of a directed graph given in Figure 3.1(b) is given as:
n1 n 2 n3 n 4 n5 n6
n1
n2
n3
n4
n5
n6
0
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
116
Software Testing
Sequence of nodes
Sequence of edges
1.
n1 to n4
n1, n2, n4
e1, e3
2.
n1 to n5
n1, n2, n5
e1, e4
3.
n1 to n2
n1, n2
e1
4.
n1 to n3
n1, n3
e2
5.
n2 to n4
n2, n4
e3
6.
n2 to n5
n2, n5
e4
7.
n2 to n1
n2, n1
e1
8.
n3 to n1
n3, n1
e2
9.
n4 to n2
n4, n2
e3
10.
n4 to n1
n4, n2, n1
e3, e1
11.
n5 to n2
n5, n2
e4
12.
n5 to n1
n5, n2, n1
e4, e1
13.
n2 to n3
n2, n1, n3
e1, e2
14.
n3 to n2
n3, n1, n2
e2, e1
15.
n4 to n3
e3, e1, e2
16.
n3 to n4
e2, e1, e3
17.
n4 to n5
n4, n2, n5
e3, e4
18.
n5 to n4
n5, n2, n4
e4, e3
19.
n5 to n3
e4, e1, e2
20.
n3 to n5
e2, e1, e4
117
The direction of an edge provides more meaning to a path. Hence, paths of a directed graph
seem to be more practical and useful. They are also called chains. The paths in a directed graph
shown in Figure 3.1(b) are given in Table 3.2.
Table 3.2. Paths of directed graph in Figure 3.1(b)
S.No.
Sequence of nodes
Sequence of edges
1.
n1 to n4
n1, n2, n4
e1, e3
2.
n1 to n5
n1, n2, n5
e1, e4
3.
n1 to n2
n1, n2
e1
4.
n1 to n3
n1, n3
e2
5.
n2 to n4
n2, n4
e3
6.
n2 to n5
n2, n5
e4
3.3.1 Cycles
When the initial and final nodes of a path are the same and if length 0, the path is called a
cycle. Consider the graph given in Figure 3.5 having 6 nodes and 6 edges with a cycle
constituted by a path n1, n2, n5, n3, n1 of length 4.
118
Software Testing
A disconnected graph is the union of two or more disjoint portions of the graph. These
disjoint portions are called the connected components of the graph.
A directed graph is said to be strongly connected if there is a path from node ni to node nj;
where node ni and nj are any pair of nodes of the graph.
Every node of the graph should have a path to every other node of the graph in the strongly
connected graphs. The directed graph shown in Figure 3.7 is a strongly connected graph.
The graph shown in Figure 3.7 has the following pair of nodes:
<n1, n2>, <n1, n3>, <n4, n1>, <n3, n2>, <n2, n4>, <n3, n4>. We identify paths for every pair of
nodes.
(a)
(b)
(c)
(d)
(e)
(f)
119
A directed graph is said to be weakly connected, if there is a path between every two nodes
when the directions of the edges are not considered. A strongly connected directed graph will
also be weakly connected when we do not consider the directions of edges. Consider the graph
shown in Figure 3.8 which is weakly connected.
When we do not consider the directions, it becomes an undirected graph where there is a
path between every two nodes of the graph. Hence, this graph is weakly connected but not
strongly connected.
Example 3.1: Consider the following undirected graph and find:
(a)
(b)
(c)
(d)
120
Software Testing
Solution:
This graph is an undirected graph with seven nodes and six edges
(a)
(b)
n1
n2
n3
n4
n5
n6
n7
(c)
e2
1
0
1
0
0
0
0
e3 e 4 e5 e6
0 0 0 0
1 1 0 0
0 0 1 1
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
n1
n2
n3
n4
n5
n6
n7
(d)
e1
1
1
0
0
0
0
0
n1
0
1
1
0
0
0
0
n2
1
0
0
1
1
0
0
n3
1
0
0
0
0
1
1
n4
0
1
0
0
0
0
0
n5
0
1
0
0
0
0
0
n6 n 7
0 0
0 0
1 1
0 0
0 0
0 0
0 0
Sequence of nodes
Sequence of edges
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
40.
41.
42.
n1 to n2
n1 to n4
n1 to n5
n1 to n3
n1 to n6
n1 to n7
n2 to n1
n2 to n4
n2 to n5
n3 to n1
n3 to n6
n3 to n7
n4 to n2
n4 to n1
n5 to n2
n5 to n1
n6 to n3
n6 to n1
n7 to n3
n7 to n1
n2 to n3
n2 to n6
n2 to n7
n3 to n2
n3 to n4
n3 to n5
n4 to n3
n4 to n5
n4 to n6
n4 to n7
n5 to n3
n5 to n4
n5 to n6
n5 to n7
n6 to n2
n6 to n4
n6 to n5
n6 to n7
n7 to n2
n7 to n4
n7 to n5
n7 to n6
n1, n2
n1, n2, n4
n1, n2, n5
n1, n3
n1, n3, n6
n1, n3, n7
n2, n1
n2, n4
n2, n5
n3, n1
n3, n6
n3, n7
n4, n2
n4, n2, n1
n5, n2
n5, n2, n1
n6, n2
n6, n3, n1
n7, n3
n7, n3, n1
n2, n1, n3
n2, n1, n3, n6
n2, n1, n3, n7
n3, n1, n2
n3, n1, n2, n4
n3, n1, n2, n5
n4, n2, n1, n3
n4, n2, n5
n4, n2, n1, n3, n6
n4, n2, n1, n3, n7
n5, n2, n1, n3
n5, n2, n4
n5, n2, n1, n3, n6
n5, n2, n1, n3, n7
n6, n3, n1, n2
n6, n3, n1, n2, n4
n6, n3, n1, n2, n5
n6, n3, n7
n7, n3, n1, n2
n7, n3, n1, n2, n4
n7, n3, n1, n2, n5
n7, n3, n6
e1
e1, e3
e1, e4
e2
e2, e5
e2, e6
e1
e3
e4
e2
e5
e6
e3
e3, e1
e4
e4, e1
e5
e5, e2
e6
e6, e2
e1, e2
e1, e2, e5
e1, e2, e6
e2, e1
e2, e1, e3
e1, e2, e4
e3, e1, e2
e3, e4
e3, e1, e2, e5
e3, e1, e2, e6
e4, e1, e2
e4, e3
e4, e1, e2, e5
e4, e1, e2, e6
e5, e2, e1
e5, e2, e1, e3
e5, e2, e1, e4
e5, e6
e6, e2, e1
e6, e2, e1, e3
e6, e2, e1, e4
e6, e5
121
122
Software Testing
Solution:
This graph is a directed graph with seven nodes and six edges
(a) The degrees of all nodes are given as:
indeg(n1) = 0
indeg(n2) = 1
indeg(n3) = 1
indeg(n4) = 1
indeg(n5) = 1
indeg(n6) = 1
indeg(n7) = 1
(b)
outdeg(n1) = 2
outdeg(n2) = 2
outdeg(n3) = 2
outdeg(n4) = 0
outdeg(n5) = 0
outdeg(n6) = 0
outdeg(n7) = 0
(c)
1
1
0
0
0
0
0
1
0
1
0
0
0
0
0
1
0
1
0
0
0
0
1
0
0
1
0
0
0
0
1
0
0
1
0
0
0
1
0
0
0
1
deg(n1) = 2
deg(n2) = 3
deg(n3) = 3
deg(n4) = 1
deg(n5) = 1
deg(n6) = 1
deg(n7) = 1
123
n1 n 2 n3 n 4 n5 n6 n 7
n1
n2
n3
n4
n5
n6
n7
(d)
0
1
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
0
0
1
0
0
0
0
0
0
1
0
0
0
0
S. No.
Sequence of nodes
Sequence of edges
1.
n1 to n2
n1, n2
e1
2.
n1 to n4
n1, n2, n4
e1, e3
3.
n1 to n5
n1, n2, n5
e1, e4
4.
n1 to n3
n1, n3
e2
5.
n1 to n6
n1, n3, n6
e2, e5
6.
n1 to n7
n1, n3, n7
e2, e6
7.
n2 to n4
n2, n4
e3
.8.
n2 to n5
n2, n5
e4
9.
n3 to n6
n3, n6
e5
10.
n3 to n7
n3, n7
e6
(e)
This graph is weakly connected because there is no path between several nodes; for
example, from node n4 to n2, n6 to n3, n3 to n1, n2 to n1, etc. however, if we do not
consider directions, there is a path from every node to every other node which is the
definition of a weakly connected graph.
124
Software Testing
The basic constructs are used to convert a program in its program graph. We consider the
program Square which takes a number as an input and generates the square of the number.
This program has 8 sequential statements which are represented by 8 nodes. All nodes are
arranged sequentially which may lead to only one path in this program graph. Every program
graph has one source node and one destination node.
125
We also consider a program given in Figure 3.11 that takes three numbers as input and prints
the largest amongst these three numbers as output. The program graph of the program is given
in Figure 3.12. There are 28 statements in the program which are represented by 28 nodes. All
nodes are not in a sequence which may lead to many paths in the program graph.
#include<stdio.h>
1 void main()
2 {
3 int num, result;
4 printf(Enter the number:);
5 scanf(%d, &num);
6 result=num*num;
7 printf(The result is: %d, result);
8 }
(a) Program to find 'square' of a number
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
#include<stdio.h>
#include<conio.h>
void main()
{
oat A,B,C;
clrscr();
printf("Enter number 1:\n");
scanf("%f", &A);
printf("Enter number 2:\n");
scanf("%f", &B);
printf("Enter number 3:\n");
scanf("%f", &C);
/*Check for greatest of three numbers*/
if(A>B) {
if(A>C) {
printf("The largest number is: %f\n",A);
}
else {
printf("The largest number is: %f\n",C);
}
}
else {
if(C>B) {
printf("The largest number is: %f\n",C);
}
(Contd.)
126
Software Testing
(Contd.)
23.
24.
25.
26.
27.
28.
else {
printf("The largest number is: %f\n",B);
}
}
getch();
}
Figure 3.12. Program graph to find the largest number amongst three numbers as given in Figure 3.11.
Our example is simple, so it is easy to find all paths starting from the source node (node 1)
and ending at the destination node (node 28). There are four possible paths. Every program
graph may provide some interesting observations about the structure of the program. In our
program graph given in Figure 3.12, nodes 2 to 10 are in sequence and nodes 11, 12, and 20
have two outgoing edges (predicate nodes) and nodes 18, 26, 27 have two incoming edges are
(junction nodes).
We may also come to know whether the program is structured or not. A large program may
be a structured program whereas a small program may be unstructured due to a loop in a
program. If we have a loop in a program, large number of paths may be generated as shown in
figure 1.5 of chapter 1. Myers [MYER04] has shown 1014 paths in a very small program graph
due to a loop that iterates up to 20 times. This shows how an unstructured program may lead
to difficulties in even finding every possible path in a program graph. Hence, testing a
structured program is much easier as compared to any unstructured program.
127
DD path graph
corresponding
nodes
S
N1
D
Remarks
Source node
Destination node
We consider a program to find the largest amongst three numbers as given in Figure 3.11.
The program graph is also given in Figure 3.12. There are many sequential nodes, decision
nodes, junction nodes available in its program graph. Its mapping table and the DD path graph
are given in Table 3.3 and Figure 3.14 respectively.
Table 3.3. Mapping of program graph nodes and DD graph nodes
Program graph nodes
DD path graph
corresponding node
Remarks
Source node
2 to 10
N1
11
N2
12
N3
13, 14
N4
Sequential nodes
15, 16, 17
N5
Sequential nodes
node 2 to 10
(Contd.)
128
Software Testing
(Contd.)
Program graph nodes
DD path graph
corresponding node
Remarks
18
N6
19
N7
20
N8
21, 22
N9
Sequential nodes
23, 24, 25
N10
Sequential nodes
26
N11
27
N12
28
Destination node
Figure 3.14. DD path graph of the program to find the largest among three numbers.
129
The DD path graphs are used to find paths of a program. We may like to test every identified
path during testing which may give us some level of confidence about the correctness of the
program.
Example 3.3: Consider the program for the determination of division of a student. Its input is
a triple of positive integers (mark1, mark2, mark3) and values are from interval [0, 100].
The program is given in Figure 3.15. The output may be one of the following words:
[First division with distinction, First division, Second division, Third division, Fail, Invalid
marks]. Draw the program graph and the DD path graph.
Solution:
The program graph is given in Figure 3.16. The mapping table of the DD path graph is given
in Table 3.4 and DD path graph is given in Figure 3.17.
/*Program to output division of a student based on the marks in three subjects*/
#include<stdio.h>
#include<conio.h>
1.
void main()
2.
3.
4.
clrscr();
5.
6.
7.
scanf("%d", &mark1);
8.
9.
scanf("%d", &mark2);
10.
11.
scanf("%d",&mark3);
12.
if(mark1>100||mark1<0||mark2>100||mark2<0||mark3>100||mark3<0) {
13.
14.
15.
else {
16.
avg=(mark1+mark2+mark3)/3;
17.
if(avg<40) {
18.
19.
20.
printf("Fail");
}
else if(avg>=40&&avg<50) {
21.
printf("Third Division");
22.
23.
else if(avg>=50&&avg<60) {
24.
printf("Second Division");
25.
}
(Contd.)
130
Software Testing
(Contd.)
26.
else if(avg>=60&&avg<75) {
27.
printf("First Division");
28.
29.
}
else
30.
31.
32.
33.
getch();
34.
131
DD path graph
corresponding node
Remarks
Source node
2 to 11
N1
12
N2
13, 14
N3
Sequential nodes
15, 16
N4
Sequential nodes
17
N5
18, 19
N6
Sequential nodes
20
N7
21, 22
N8
Sequential nodes
23
N9
24, 25
N10
Sequential nodes
26
N11
27, 28
N12
Sequential nodes
29, 31
N13
Sequential nodes
32
N14
and 31 are terminated here
33
N15
34
Destination node
Example 3.4: Consider the program for classification of a triangle. Its input is a triple of
positive integers (say a, b and c) and values from the interval [1, 100]. The output may be
[Right angled triangle, Acute angled triangle, Obtuse angled triangle, Invalid triangle, Input
values are out of Range]. The program is given in Figure 3.18. Draw the program graph and
the DD path graph.
Solution:
The program graph is shown in Figure 3.19. The mapping table is given in Table 3.5 and the
DD path graph is given in Figure 3.20.
132
Software Testing
/*Program to classify whether a triangle is acute, obtuse or right angled given the sides of
the triangle*/
//Header Files
#include<stdio.h>
#include<conio.h>
#include<math.h>
1.
2.
{
(Contd.)
133
(Contd.)
3.
double a,b,c;
4.
double a1,a2,a3;
5.
int valid=0;
6.
clrscr();
7.
8.
scanf("%lf",&a);
9.
10.
scanf("%lf",&b);
11.
12.
scanf("%lf",&c);
/*Checks whether a triangle is valid or not*/
13.
if(a>0&&a<=100&&b>0&&b<=100&&c>0&&c<=100) {
14.
if((a+b)>c&&(b+c)>a&&(c+a)>b) {
15.
valid=1;
16.
17.
else {
18.
valid=-1;
19.
20.
21.
if(valid==1) {
22.
a1=(a*a+b*b)/(c*c);
23.
a2=(b*b+c*c)/(a*a);
24.
a3=(c*c+a*a)/(b*b);
25.
if(a1<1||a2<1||a3<1) {
26.
27.
28.
else if(a1==1||a2==1||a3==1) {
29.
30.
31.
else {
32.
33.
34.
35.
else if(valid==-1) {
36.
printf("\nInvalid Triangle");
37.
38.
39.
}
else {
printf("\nInput Values are Out of Range");
(Contd.)
134
Software Testing
(Contd.)
40.
41.
getch();
42.
} //Main Ends
135
DD path graph
corresponding node
Remarks
Source node
2 to 12
N1
13
N2
14
N3
15, 16
N4
Sequential nodes
17, 18, 19
N5
Sequential nodes
20
N6
21
N7
22, 23, 24
N8
Sequential nodes
25
N9
26, 27
N10
Sequential nodes
28
N11
29, 30
N12
Sequential nodes
31, 32, 33
N13
Sequential nodes
34
N14
35
N15
36, 37
N16
Sequential nodes
38, 39, 40
N17
Sequential nodes
41
N18
42
Destination node.
136
Software Testing
Example 3.5: Consider the program for determining the day of the week. Its input is a triple
of day, month and year with the values in the range
1
month
day
1900
12
31
year
2058
The possible values of the output may be [Sunday, Monday, Tuesday, Wednesday, Thursday,
Friday, Saturday, Invalid date]. The program is given in Figure 3.21.
137
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
138
Software Testing
(Contd.)
38.
39.
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.
59.
60.
61.
62.
63.
64.
65.
66.
67.
68.
69.
70.
71.
72.
73.
74.
75.
76.
77.
78.
79.
80.
validDate=1;
}
else {
validDate=0;
}
}
if(validDate) { /*Calculation of Day in the week*/
if(year>=1900&&year<2000){
century=0;
y1=year-1900;
}
else {
century=6;
y1=year-2000;
}
Y=y1+(y1/4);
if(month==1) {
if(leap==0) {
M=0; /*for non-leap year*/
}
else {
M=6; /*for leap year*/
}
}
else if(month==2){
if(leap==0) {
M=3; /*for non-leap year*/
}
else {
M=2; //for leap year
}
}
else if((month==3)||(month==11)) {
M=3;
}
else if((month==4)||(month==7)) {
M=6;
}
else if(month==5) {
M=1;
}
else if(month==6) {
M=4;
(Contd.)
139
(Contd.)
81.
82.
83.
84.
85.
86.
87.
88.
89.
90.
91.
92.
93.
94.
95.
96.
97.
98.
99.
100.
101.
102.
103.
104.
105.
106.
107.
108.
109.
110.
111.
112.
113.
114.
115.
116.
117.
118.
}
else if(month==8) {
M=2;
}
else if((month==9)||(month==12)) {
M=5;
}
else {
M=0;
}
date=(century+Y+M+day)%7;
if(date==0) { /*Determine the day of the week*/
printf("Day of the week for [%d:%d:%d] is Sunday",day,month,year);
}
else if(date==1) {
printf("Day of the week for [%d:%d:%d] is Monday",day,month,year);
}
else if(date==2) {
printf("Day of the week for [%d:%d:%d] is Tuesday",day,month,year);
}
else if(date==3) {
printf("Day of the week for [%d:%d:%d] is Wednesday",day,month,year);
}
else if(date==4) {
printf("Day of the week for [%d:%d:%d] is Thursday",day,month,year);
}
else if(date==5) {
printf("Day of the week for [%d:%d:%d] is Friday",day,month,year);
}
else {
printf("Day of the week for [%d:%d:%d] is Saturday",day,month,year);
}
}
else {
printf("The date entered [%d:%d:%d] is invalid",day,month,year);
}
getch();
}
Solution:
The program graph is shown in Figure 3.22. The mapping table is given in Table 3.6 and the
DD path graph is given in Figure 3.23.
140
Software Testing
141
DD path graph
corresponding node
Remarks
1
2 to 10
S
N1
Source node
11
12
13
14
15, 16
17
N2
N3
N4
N5
N6
N7
18
N8
19
20, 21
22, 23, 24
25
N9
N10
N11
N12
26
27
28, 29
30
31, 32
33, 34, 35
36
N13
N14
N15
N16
N17
N18
N19
37
38, 39
40, 41, 42
43
N20
N21
N22
N23
44
N24
45
46, 47, 48
49, 50, 51, 52
53
N25
N26
N27
N28
54
55
56, 57
58, 59, 60
N29
N30
N31
N32
from node 2 to 10
Decision node, if true goto 12, else goto 44
Decision node, if true goto 13, else goto 18
Intermediate node terminated at node 14
Decision node, if true goto 15, else goto 17
Sequential nodes
Junction node, two edges 14 and 16 are
terminated here
Junction node, two edges 12 and 17 are
terminated here. Also a decision node, if true
goto 19, else goto 26
Decision node, if true goto 20, else goto 22
Sequential nodes
Sequential nodes
Junction node, two edges 21 and 24 are
terminated here
Decision node, if true goto 27, else goto 37
Decision node, if true goto 28, else goto 30
Sequential nodes
Decision node, if true goto 31, else goto 33
Sequential nodes
Sequential nodes
Junction node, three edges 29, 32, and 35
are terminated here
Decision node, if true goto 38, else goto 40
Sequential nodes
Sequential nodes
Four edges 25, 36, 39, and 42 are terminated here
Junction node, two edges 11 and 43 are
terminated here and also a decision node. If
true goto 45, else goto 114
Decision node, if true goto 46, else goto 49
Sequential nodes
Sequential nodes
Junction node, two edges 48 and 52 are
terminated here
Decision node, if true goto 55, else goto 62
Decision node, if true goto 56, else goto 58
Sequential nodes
Sequential nodes
(Contd.)
142
Software Testing
(Contd.)
Program graph
nodes
DD path graph
corresponding node
Remarks
61
N33
62
63
64, 65
66, 67, 68
69
N34
N35
N36
N37
N38
70
71, 72
73
74, 75
76
77, 78
79
80, 81
82
83, 84
85
86, 87
88, 89, 90
91
N39
N40
N41
N42
N43
N44
N45
N46
N47
N48
N49
N50
N51
N52
92
93, 94
95
96, 97
98
99, 100
101
N53
N54
N55
N56
N57
N58
N59
102, 103
104
105, 106
107
108, 109
110, 111, 112
113
N60
N61
N62
N63
N64
N65
N66
N67
N68
118
143
144
Software Testing
V(G) = e n + 2P
where V(G) = Cyclomatic complexity
G : program graph
n : number of nodes
e : number of edges
P : number of connected components
The program graph (G) is a directed graph with single entry node and single exit node. A
connected graph is a program graph where all nodes are reachable from entry node, and exit
node is also reachable from all nodes. Such a program graph will have connected component
(P) value equal to one. If there are parts of the program graph, the value will be the number of
parts of the program graph where one part may represent the main program and other parts may
represent sub-programs.
(ii) Cyclomatic complexity is equal to the number of regions of the program graph.
(iii) Cyclomatic complexity
V(G) =
+1
where is the number of predicate nodes contained in the program graph (G).
The only restriction is that every predicate node should have two outgoing edges i.e. one for
true condition and another for false condition. If there are more than two outgoing edges,
the structure is required to be changed in order to have only two outgoing edges. If it is not
possible, then this method ( + 1) is not applicable.
Properties of cyclomatic complexity:
1.
2.
3.
4.
5.
V(G) 1
V(G) is the maximum number of independent paths in program graph G.
Addition or deletion of functional statements to program graph G does not affect V(G).
G has only one path if V(G)=1
V(G) depends only on the decision structure of G.
145
(ii)
V(G) = e n + 2P
= 75+2
= 4
V(G) = No. of regions of the graph
Hence, V(G) = 4
Three regions (1, 2 and 3) are inside and 4th is the outside region of the graph
(iii) V(G) = + 1
= 3+1=4
There are three predicate nodes namely node a, node c and node d.
These four independent paths are given as:
Path 1 :
ace
Path 2 :
ade
Path 3 :
adce
Path 4 :
acbe
We consider another program graph given in Figure 3.25 with three parts of the program
graph.
146
Software Testing
V(G) = e n + 2P`
= (4+7+8) (4+6+7) + 2x3
= 19 17 + 6
= 8
We calculate the cyclomatic complexity of each part of the graph independently.
V(G Part I) = 4 4 + 2 = 2
V(G Part II) = 7 6 + 2 = 3
V(G Part III) = 8 7 + 2 = 3
Hence, V (G Part I U G Part II U G Part III)
= V (G Part I) + V (G Part II) + V (G Part III)
In general, the cyclomatic complexity of a program graph with P connected components is
equal to the summation of their individual cyclomatic complexities. To understand this,
consider graph Gi where 1 i P denote the P connected components of a graph, and ei and ni
are the number of edges and nodes in the ith connected component of the graph. Then, we may
have the following equation:
ei
V(G) = e n+2P =
i 1
P
ni
2P
i 1
P
(ei
147
ni
i 1
2)
V(G i )
i 1
The cyclomatic complexity is a popular measure to know the complexity of any program.
It is easy to calculate and immediately provides an insight to the implementation of the
program. McCabe suggested an upper limit for this cyclomatic complexity i.e. 10 [MACC76].
If this exceeds, developers have to redesign the program to reduce the cyclomatic complexity.
The purpose is to keep the size of the program manageable and compel the testers to execute
all independent paths. This technique is more popular at module level and forces everyone to
minimize its value for the overall success of the program. There may be situations where this
limit seems unreasonable; e.g. when a large number of independent cases follow a selection
function like switch or case statement.
Example 3.6: Consider the following DD path graph (as given in Figure 3.14) and calculate
the cyclomatic complexity. Also find independent paths.
Solution:
(i)
V(G) = e n + 2P
= 16 14 + 2
= 4
148
Software Testing
(ii)
(iii) V (G) =
+1
Example 3.7: Consider the problem for determination of division of a student with the DD
path graph given in Figure 3.17. Find cyclomatic complexity and also find independent
paths.
Solution:
Number of edges (e)
Number of nodes (n)
=
=
21
17
(i) V(G) = e n + 2P = 21 17 + 2 = 6
(ii) V(G) = + 1 = 5 + 1 = 6
(iii) V(G) = Number of regions = 6
Hence cyclomatic complexity is 6 meaning there are six independent paths in the DD path
graph.
The independent paths are:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
Example 3.8: Consider the classification of triangle problem given in Example 3.2 with its
DD path graph given in Figure 3.20. Find the cyclomatic complexity and also find independent
paths.
Solution:
Number of edges (e) = 25
Number of nodes (n) = 20
(i) V(G) = e n + 2P = 25 20 + 2 = 7
(ii) V(G) = + 1 = 6 + 1 = 7
(iii) V(G) = Number of regions = 7
149
Hence cyclomatic complexity is 7. There are seven independent paths as given below:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
Example 3.9: Consider the DD path graph given in Figure 3.23 for determination of the day
problem. Calculate the cyclomatic complexity and also find independent paths.
Solution:
Number of edges (e) = 96
Number of nodes (n) = 70
(i)
V(G) = e n + 2P
= 96 70 + 2 = 28
S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N41, N43, N45, N47, N49, N51,
N52, N53, N54, N66, N68, D
(ii) S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N41, N43, N45, N47, N49, N50,
N52, N53, N54, N66, N68, D
(iii) S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N41, N43, N45, N47, N48, N52,
N53, N54, N66, N68, D
(iv) S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N41, N43, N45, N46, N52, N53,
N54, N66, N68, D
(v) S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N41, N43, N44, N52, N53, N54,
N66, N68, D
(vi) S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N41, N42, N52, N53, N54, N66,
N68, D
(vii) S, N1, N2, N24, N25, N27, N28, N29, N34, N39, N40, N52, N53, N54, N66, N68, D
(viii) S, N1, N2, N24, N25, N27, N28, N29, N34, N35, N37, N38, N52, N53, N54, N66,
N68, D
(ix) S, N1, N2, N24, N25, N27, N28, N29, N34, N35, N36, N38, N52, N53, N54, N66,
N68, D
(x) S, N1, N2, N24, N25, N27, N28, N29, N30, N32, N33, N52, N53, N54, N66, N68, D
(xi) S, N1, N2, N24, N25, N27, N28, N29, N30, N31, N33, N52, N53, N54, N66, N68, D
(xii) S, N1, N2, N24, N25, N26, N28, N29, N30, N31, N33, N52, N53, N54, N66, N68, D
(xiii) S, N1, N2, N24, N67, N68, D
150
Software Testing
(xiv)
(xv)
(xvi)
(xvii)
(xviii)
(xix)
(xx)
(xxi)
(xxii)
(xxiii)
S, N1, N2, N3, N8, N13, N20, N22, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N13, N20, N21, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N13, N14, N16, N18, N19, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N13, N14, N16, N17, N19, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N13, N14, N15, N19, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N9, N11, N12, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N9, N10, N12, N23, N24, N67, N68, D
S, N1, N2, N3, N8, N9, N10, N12, N23, N24, N67, N68, D
S, N1, N2, N3, N4, N5, N6, N7, N8, N9, N10, N12, N23, N24, N67, N68, D
S, N1, N2, N24, N25, N26, N28, N29, N30, N31, N52, N53, N55, N57, N59, N61,
N63, N65, N66, N68, D
(xxiv) S, N1, N2, N24, N25, N26, N28, N29, N30N N31, N52, N53, N55, N57, N59, N61,
N63, N64, N66, N68, D
(xxv) S, N1, N2, N24, N25, N26, N28, N29, N30, N31, N52, N53, N55, N57, N59, N61,
N62, N66, N68, D
(xxvi) S, N1, N2, N24, N25, N26, N28, N29, N30, N31, N52, N53, N55, N57, N59, N60,
N66, N68, D
(xxvii) S, N1, N2, N24, N25, N26, N28, N29, N30, N31, N52, N53, N55, N57, N58, N66,
N68, D
(xxviii) S, N1, N2, N24, N25, N26, N28, N29, N30, N31, N52, N53, N55, N56, N66, N68,
D
2
3
4
5
(a)
5
e
f
c
S
S
N1
N2
N3
N1
N2
N3
N4
N5
N6
N7
N8
N9
N10
N11
N12
151
a
b
c
h
d
N4
N5
N6
N7
N8
N9
N10
N11
N12
D
(b)
Graph matrix is the tabular representation of a program graph. If we assign weight for every
entry in the table, then this may be used for the identification of independent paths. The
simplest weight is 1, if there is a connection and 0 if there is no connection. A matrix with such
weights is known as connection matrix. A connection matrix for Figure 3.26 (b) is obtained by
replacing each entry with 1, if there is a link and 0 if there is no link.
152
Software Testing
We do not show 0 entries for simplicity and blank space is treated as 0 entry as shown in
Figure 3.27.
S
N1
1
S
N1
N2
N3
N4
N5
N6
N7
N8
N9
N10
N11
N12
D
N2
N3
N4
N5
N6
N7
N8
N9
N10
N11
N12
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1-1=0
1-1=0
2-1=1
2-1=1
1-1=0
1-1=0
1-1=0
1-1=0
2-1=1
1-1=0
1-1=0
1-1=0
1-1=0
3+1=4
Figure 3.27. Connection matrix for program graph shown in Figure 3.26(b)
The connection matrix can also be used to find cyclomatic complexity as shown in Figure
3.27. Each row with more than one entry represents a predicate node and cyclomatic complexity
is predicate nodes plus one ( +1).
As we know, each graph matrix expresses a direct link between nodes. If we take the square
of the matrix, it shows 2-links relationships via one intermediate node. Hence, square matrix
represents all paths of two links long. The Kth power of matrix represents all paths of K links
long. We consider the graph matrix of the program graph given in Figure 3.26 (a) and find its
square as shown below:
1
1
2
3
4
5
3
a
4
b
1
1
2
3
4
5
e
f
g
d
c
2
ad
3
bc
5
af+bg
de
cf
cd
[A]2
[A]
There are two paths af and bg of two links between node 1 and node 5. There is no one link
path from node 1 to node 5. We will get three links paths after taking the cube of this matrix
as given below:
1
1
2
3
4
5
2
bcd
5
ade+bcf
cde
[A]3
There are two 3-links paths from node 1 to node 5 which are ade and bcf. If we want to find
four links paths, we extend this and find [A]4 as given below:
1
2
3
4
5
153
5
bcde
[A]4
There is only one four links path bcde, which is from node 1 to node 5. Our main objective is
to use the graph matrix to find all paths between all nodes. This can be obtained by summing A, A2,
A3.An-1. Hence, for the above examples, many paths are found and are given in Figure 3.28.
One link paths
a, b, c, d, e, f, g
bcde
node
node
node
node
node
node
node
node
node
node
node
node
node
node
node
node
node
1
1
1
1
2
2
2
2
3
3
3
3
4
4
4
4
5
to
to
to
to
to
to
to
to
to
to
to
to
to
to
to
to
to
node 2
node 3
node 4
node 5
node 1
node 3
node 4
node 5
node 1
node 2
node 4
node 5
node 1
node 2
node 3
node 5
all other nodes
:
:
:
:
:
:
:
:
:
:
:
:
:
:
:
:
:
ad, bcd
a, bc
b
af, bg, ade, bcf, bcde
e
d
f, de
cd
c
g, cf, cde
-
As the cyclomatic complexity of this graph is 4, there should be 4 independent paths from
node 1 (source node) to node 5 (destination node) given as:
Path 1
Path 2
Path 3
Path 4
:
:
:
:
af
bg
ade
bcf
Although 5 paths are shown, bcde does not contain any new edge or node. Thus, it cannot
be treated as an independent path in this set of paths. This technique is easy to program and
can be used easily for designing a testing tool for the calculation of cyclomatic complexity and
generation of independent paths.
154
Software Testing
Example 3.10: Consider the program graph shown in Figure 3.29 and draw the graph and
connection matrices. Find out the cyclomatic complexity and two/three link paths from a node
to any other node.
Solution:
1
1
2
3
4
5
3
a
1
1
2
3
4
5
e
b
f
3
1
1
1
1
1-1=0
1-1=0
3-1=2
1-1=0
2+1=3
Connection Matrix
2
ac
4
ad
5
ab
ce+df
[A2]
1
1
2
3
4
5
[A3]
5
ace+adf
155
This indicates that there are the following two and three link paths:
Two links paths
ace, adf
156
Software Testing
S
N1
N2
N3
N4
N5
N6
N7
N8
N9
N10
N11
N12
N13
N14
N15
N1
N2
N3
19
N4
N5
N6
14
N7
N8
15
N9
10
11
N10
16
N11
12
13
N12
17
N13
18
N14
20
N15
21
N1
N2
N3
N4
N5
N6
N7
N8
N9
N10
N11
N12
N13
N14
N5
D
1-1=0
1-1=0
1
2-1=1
N3
N4
N15
1-1=0
1-1=0
1
2-1=1
N6
1-1=0
1-1=0
1-1=0
N12
1-1=0
N13
N7
2-1=1
N8
N9
2-1=1
N10
N11
N14
N15
D
2-1=1
1-1=0
1
1-1=0
1
1-1=0
5+1=6
Example 3.12: Consider the DD path graph shown in Figure 3.33 for classification of triangle
problem and draw the graph and connection matrices.
157
Solution:
The graph and connection matrices are shown in Figure 3.34 and Figure 3.35 respectively.
S
S
N1
N2
N3
N1
N2
N3
N4
N5
N6
N9
N10
N11
N12
N13
N14
N15
N16
N17
N18
8
4
5
6
N5
N7
N8
N4
N6
N7
9
10
19
(Contd.)
158
Software Testing
(Contd.)
S
N1
N2
N3
N4
N5
N6
N7
N8
N8
N9
N10
N11
12
13
N12
N13
N14
N15
N16
N17
N18
11
N9
N10
16
N11
14
15
N12
17
N13
18
N14
22
N15
20
21
N16
23
N17
24
N18
25
N1 N2 N3 N4 N5 N6 N7 N8 N9 N10
N15
N16
N17
N18
1
1
2-1=1
2-1=1
N9
N14
1-1=0
1
N5
N8
N13
N7
N12
1-1=0
N4
N6
N11
1-1=0
1-1=0
1
1-1=0
1
2-1=1
1-1=0
1
2-1=1
N10
1-1=0
N12
1-1=0
N13
N11
2-1=1
1-1=0
N14
1-1=0
N16
1-1=0
N17
N15
N18
2-1=1
1-1=0
1
1-1=0
D
6+1=7
159
160
Software Testing
161
EXERCISES
3.1 What is a graph? Define a simple graph, multigraph and regular graph with examples.
3.2 How do we calculate degree of a node? What is the degree of an isolated node?
3.3 What is the degree of a node in a directed graph? Explain the significance of indegree
and outdegree of a node.
3.4 Consider the following graph and find the degree of every node. Is it a regular graph?
Identify source nodes, destination nodes, sequential nodes, predicate nodes and junction
nodes in this graph.
3.5 Consider the graph given in exercise 3.4 and find the following:
(i) Incidence matrix
(ii) Adjacency matrix
(iii) Paths
(iv) Connectedness
(v) Cycles
3.6 Define incidence matrix and explain why the sum of entries of any column is always 2.
What are various applications of incidence matrix in testing?
162
Software Testing
(i) Calculate the degree of every node and identify the cycles.
(ii) Is this graph strongly connected?
(iii) Draw the incidence matrix and adjacency matrix.
(iv) Find all paths.
3.13 What is cyclomatic complexity? Discuss different ways to compute it with examples.
3.14 Explain program graph notations. Use these notations to represent a program graph
from a given program.
3.15 Consider the following program segment:
/* sort takes an integer array and sorts it in ascending order*/
1. void sort (int a [ ], int n) {
2.
int i, j;
3. for(i=0;i<n-1;i++)
4.
for(i=i+1;j<n;j++)
5.
if(a[i]>a[j])
6.
{
7.
temp=a[i];
8.
a[i]=a[j];
9.
a[j]=temp;
10.
}
11. }
163
(a) Draw the control flow graph for this program segment.
(b) Determine the cyclomatic complexity for this program (show the intermediate
steps of your computation).
(c) How is the cyclomatic complexity metric useful?
3.17 Write a program to determine whether a number is even or odd. Draw the program
graph and DD path graph. Find the independent paths.
3.18 Consider the following program and draw the program path graph and DD path graph.
Also find out cyclomatic complexity and independent paths.
void main ( )
{
int x, y;
scanf (%d \n, &x);
scanf (%d \n, &y);
while (x ! = y)
{
if (x > y)
x = x y;
else y = y x;
}
printf (x = %d, x);
}
3.19 What are differences between a directed graph and un-directed graph? Which one is
more relevant in software testing and why?
3.20 How do we define the connectedness of a graph? Why every strongly connected graph
is also called weakly connected graph?
FURTHER READING
An early work on graph models of programs can be found in:
C.V. Ramamoorthy, Analysis of Graphs by Connectivity Considerations,
Journal of the ACM, vol. 13, pp. 211222, 1966.
The book by Veerarajan is an excellent text with an exhaustive number of examples:
T. Veerarajan, Discrete Mathematics with Graph Theory and Combinatorics,
McGraw Hill, 2007.
164
Software Testing
4
Structural Testing
Structural testing is considered more technical than functional testing. It attempts to design test
cases from the source code and not from the specifications. The source code becomes the base
document which is examined thoroughly in order to understand the internal structure and other
implementation details. It also gives insight in to the source code which may be used as an
essential knowledge for the design of test cases. Structural testing techniques are also known as
white box testing techniques due to consideration of internal structure and other implementation
details of the program. Many structural testing techniques are available and some of them are
given in this chapter like control flow testing, data flow testing, slice based testing and mutation
testing.
166
Software Testing
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
void main()
{
int a,b,c,x=0,y=0;
clrscr();
printf("Enter three numbers:");
scanf("%d %d %d",&a,&b,&c);
if((a>b)&&(a>c)){
x=a*a+b*b;
}
if(b>c){
y=a*a-b*b;
}
printf("x= %d y= %d",x,y);
getch();
}
The objective of achieving 100% statement coverage is difficult in practice. A portion of the
program may execute in exceptional circumstances and some conditions are rarely possible,
and the affected portion of the program due to such conditions may not execute at all.
If a > b and a > c, then the statement number 7 will be true (first possibility). However, if a
< b, then second condition (a > c) would not be tested and statement number 7 will be false
(third and fourth possibilities). If a > b and a < c, statement number 7 will be false (second
possibility). Hence, we should write test cases for every true and false condition. Selected inputs
may be given as:
(i) a = 9, b = 8, c = 7 (first possibility when both are true)
(ii) a = 9, b = 8, c = 10 (second possibility first is true, second is false)
(iii) a = 7, b = 8, c = 9 (third and fourth possibilities- first is false, statement number 7 is
false)
Hence, these three test cases out of four are sufficient to ensure the execution of every
condition of the program.
168
Software Testing
goal of executing all paths in many programs. If we do so, we may be confident about the
correctness of the program. If it is unachievable, at least all independent paths should be
executed. The program given in Figure 4.1 has four paths as given as:
(i)
(ii)
(iii)
(iv)
Execution of all these paths increases confidence about the correctness of the program.
Inputs for test cases are given as:
Inputs
b
c
8
9
S. No.
Paths Id.
Paths
1.
Path-1
17,10, 1315
a
7
Expected Output
2.
Path-2
17, 1015
x=0 y=15
x=0 y=0
3.
Path-3
110, 1315
x=130 y=0
4.
Path-4
115
x=145 y=17
Some paths are possible from the program graph, but become impossible when we give inputs
as per logic of the program. Hence, some combinations may be found to be impossible to create.
Path testing guarantee statement coverage, branch coverage and condition coverage. However,
there are many paths in any program and it may not be possible to execute all the paths. We
should do enough testing to achieve a reasonable level of coverage. We should execute at least
(minimum level) all independent paths which are also referred to as basis paths to achieve
reasonable coverage. These paths can be found using any method of cyclomatic complexity.
We have to decide our own coverage level before starting control flow testing. As we go up
(statement coverage to path coverage) in the ladder, more resources and time may be required.
Example 4.1: Consider the program for the determination of the division of a student. The
program and its program graph are given in Figure 3.15 and 3.16 of chapter 3 respectively.
Derive test cases so that 100% path coverage is achieved.
Solution:
The test cases are given in Table 4.1.
Table 4.1. Test cases
S. No.
mark1
mark2
mark3
Expected output
Paths
1.
30
20
Invalid marks
114, 33, 34
2.
40
20
45
Fail
3.
45
47
50
Third division
4.
55
60
57
Second division
5.
65
70
75
First division
6.
80
85
90
Example 4.2: Consider the program and program graph given below. Derive test cases so that
100% statement coverage and path coverage is achieved.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
170
Software Testing
Solution:
The test cases to guarantee 100% statement and branch coverage are given in Table 4.2.
Table 4.2. Test cases for statement coverage
S. No.
First name
Address
Expected output
Paths
1.
ashok
E-29, eastofkailash
abc@yahoo.com
2.
ruc
E29
abc
Invalid address length
Invalid email length
3.
ruc
E-29
abc@yahoocom
Invalid address length
Email must contain . and
@ character
(Contd.)
1.
First
name
-
2.
S. No.
Address
Expected output
Paths
135
130, 34,35
3.
4.
ruc
E29
abc
Invalid address length
Invalid email length
5.
6.
ruc
E-29
abc@yahoocom
7.
8.
ruc
E-29
Abs@yahoo.com
125, 3035
125, 30, 31, 34,
35
120, 2535
120, 2531, 34,
35
9.
116, 2035
10.
11.
116, 2025,
3035
12.
ruc
E-29, eastofkailash
Abs
Invalid email length
13.
14.
ruc
E-29, eastofkailash
abc@yahoocom
15.
16.
ruc
E-29, eastofkailash
abc@yahoo.com
17.
112, 1635
18.
112, 1631,
34,35
19.
112, 1625,
3035
20.
ashok
E29
Abc
21.
112, 1620,
2535
(Contd.)
172
Software Testing
(Contd.)
S. No.
First
name
Address
Expected output
Paths
22.
ashok
E29
abc@yahoocom
112, 1620,
2531, 34, 35
23.
24.
ashok
E29
abc@yahoo.com
25.
26.
27.
28.
ashok
E-29, eastofkailash
Abs
29.
30.
ashok
E-29, eastofkailash
Abcyahoo.com
31.
ashok
E-29, eastofkailash
abc@yahoo.com
32.
Example 4.3: Consider the program for classification of a triangle given in Figure 3.10.
Derive test cases so that 100% statement coverage and path coverage is achieved.
Solution:
The test cases to guarantee 100% statement and branch coverage are given in Table 4.4.
Table 4.4. Test cases for statement coverage
S. No.
Expected output
Paths
1.
30
20
40
116,2027,34,41,42
2.
30
40
50
116,2025,2830,34,41,42
3.
40
50
60
16,2025,28,3134,41,42
4.
30
10
15
Invalid triangle
114,1721,3537,41,42
5.
102
50
60
113,21,35,38,39,4042
Expected output
Paths
1.
102
113,21,35,38,39,4042
2.
114,1719,20,21,35,38,39,4042
3.
116,20,21,35,38,39,4042
4.
113,21,35,36,37,41,42
5.
30
10
15
Invalid triangle
114,1721,3537,41,42
6.
116,20,21,3537,41,42
7.
113,2125,28,3134,41,42
8.
114,1725,28,3134,41,42
9.
40
50
60
116,2025,28,3134,41,42
10.
113,2125,2830,34,41,42
11.
114,1725,2830,34,41,42
12.
30
40
50
116,2025,2830,34,41,42
13.
113,2127,34,41,42
14.
114,1727,34,41,42
15.
30
20
40
116,2027,34,41,42
Thus, there are 15 paths, out of which 10 paths are not possible to be executed as per the logic
of the program.
What will be the output? The value of a may be the previous value stored in the memory
location assigned to variable a or a garbage value. If we execute the program, we may get an
unexpected value (garbage value). The mistake is in the usage (reference) of this variable
without first assigning a value to it. We may assume that all variables are automatically
assigned to zero initially. This does not happen always. If we define at line number 4, static
int a, b, c, then all variables are given zero value initially. However, this is a language and
compiler dependent feature and may not be generalized.
174
Software Testing
Data flow testing may help us to minimize such mistakes. It has nothing to do with dataflow diagrams. It is based on variables, their usage and their definition(s) (assignment) in the
program. The main points of concern are:
(i)
(ii)
Data flow testing focuses on variable definition and variable usage. In line number 5 of the
above program, variable a is defined and variables b and c are used. The variables are
defined and used (referenced) throughout the program. Hence, this technique concentrates on
how a variable is defined and used at different places of the program.
We may define a variable, use a variable and redefine a variable. So, a variable must be first
defined before any type of its usage. Define / reference anomalies may be identified by static
analysis of the program i.e. analyzing program without executing it. This technique uses the
program graphs to understand the define / use conditions of all variables. Some terms are
used frequently in data flow testing and such terms are discussed in the next sub-section.
4.2.2 Definitions
A program is first converted into a program graph. As we all know, every statement of a
program is replaced by a node and flow of control by an edge to prepare a program graph.
There may be many paths in the program graph.
(ii)
A node of a program graph is a defining node for a variable , if and only if, the
value of the variable is defined in the statement corresponding to that node. It is
represented as DEF ( , n) where is the variable and n is the node corresponding to
the statement in which is defined.
Usage node
A node of a program graph is a usage node for a variable , if and only if, the value of
the variable is used in the statement corresponding to that node. It is represented as
USE ( , n), where is the variable and n in the node corresponding to the statement
in which is used.
A usage node USE ( , n) is a predicate use node (denoted as P-use), if and only if, the
statement corresponding to node n is a predicate statement otherwise USE ( , n) is a
computation use node (denoted as C-use).
Variable(s)
Used at node
(iii) Generate all du-paths from define/use variable table of step (iii) using the following
format:
S. No.
Variable
du-path(begin, end)
176
Software Testing
(ii)
Variable
Used at node
1.
11, 12, 13
2.
11, 20, 24
3.
10
The du-paths with beginning node and end node are given as:
Variable
6, 11
6, 12
6, 13
8, 11
8, 20
8, 24
10, 12
10, 16
10, 20
10, 21
The first strategy (best) is to test all du-paths, the second is to test all uses and the third is to
test all definitions. The du-paths as per these three strategies are given as:
All
du paths
and
all uses
(Both are same in this
example)
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Here all du-paths and all-uses paths are the same (10 du-paths). But in the 3rd case, for all
definitions, there are three paths.
Test cases are given below:
Test all du-paths
A
Inputs
B
Expected
Output
Remarks
1.
611
2.
612
3.
613
4.
811
5.
811, 19, 20
6.
7.
1012
8.
1012, ,15, 16
9.
10.
A
9
7
8
B
8
9
7
S. No.
Test All
S. No.
1.
2.
3.
Inputs
C
7
8
9
Expected Output
Remarks
9
9
9
611
811
1012
In this example all du-paths and all uses yield the same number of paths. This may not
always be true. If we consider the following graph and find du paths with all three strategies,
we will get a different number of all-du paths and all-uses paths.
178
Software Testing
Variables
Used at node
1.
7, 10
2.
8, 9
Variables
1.
1, 7
1, 10
2.
1, 8
1, 9
14, 6, 7
1, 2, 57
14, 6, 9, 10
1, 2, 5, 6, 9, 10
14, 6, 7, 8
1, 2, 58
14, 6, 9
1, 2, 5, 6, 9
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
(Contd.)
(2 paths)
14, 6, 7
14, 6, 9, 10
14, 6-8
14, 6, 9
Yes
Yes
Yes
Yes
14, 6, 7
14, 68
Yes
Yes
Hence the number of paths is different in all testing strategies. When we find all du-paths,
some paths may become impossible paths. We show them in order to show all combinations.
Example 4.4: Consider the program for the determination of the division problem. Its input is
a triple of positive integers (mark1, mark2, mark3) and values for each of these may be from
interval [0, 100]. The program is given in Figure 3.15. The output may have one of the options
given below:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
Fail
Third division
Second division
First division
First division with distinction
Invalid marks
Find all du-paths and identify those du-paths that are definition clear. Also find all du-paths,
all-uses and all-definitions and generate test cases for these paths.
Solution:
(i)
(ii)
The program graph is given in Figure 3.16. The variables used in the program are
mark1, mark2, mark3, avg.
The define/ use nodes for all variables are given below:
S. No.
Variable
Used at node
1.
mark1
12, 16
2.
mark2
12, 16
3.
mark3
11
12, 16
4.
avg
16
(iii) The du-paths with beginning and ending nodes are given as:
S. No.
Variable
1.
mark1
7, 12
7, 16
2.
mark2
9, 12
9, 16
3.
mark3
11, 12
11, 16
(Contd.)
180
Software Testing
(Contd.)
S. No.
Variable
4.
Avg
16, 17
16, 20
16, 23
16, 26
712
712, 15, 16
912
912, 15, 16
11, 12
11, 12, 15, 16
16, 17
16, 17, 20
16, 17, 20, 23
16, 17, 20, 23, 26
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
712
912
11, 12
16, 17
Yes
Yes
Yes
Yes
Test cases for all du-paths and all-uses are given in Table 4.6 and test cases for all definitions
are given in Table 4.7.
Table 4.6. Test cases for all du-paths and all-uses
S. No.
mark1
mark2
mark3
Expected Output
Remarks
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
101
60
50
60
50
60
30
45
55
65
50
50
101
70
50
75
40
50
60
70
50
40
50
80
101
80
30
50
50
70
Invalid marks
Second division
Invalid marks
First division
Invalid marks
First division
Fail
Third division
Second division
First division
712
712, 15, 16
912
912, 15, 16
11, 12
11, 12, 15, 16
16, 17
16, 17, 20
16, 17, 20, 23
16, 17, 20, 23, 26
Table 4.7.
S. No.
1.
2.
3.
4.
mark1
101
50
50
30
mark2
50
101
50
40
mark3
50
50
101
30
Expected Output
Invalid marks
Invalid marks
Invalid marks
Fail
Remarks
712
912
11, 12
16, 17
Example 4.5: Consider the program of classification of a triangle. Its input is a triple of
positive integers (a, b and c) and values for each of these may be from interval [0, 100]. The
program is given in Figure 3.18. The output may have one of the options given below:
(i)
(ii)
(iii)
(iv)
(v)
Find all du-paths and identify those du-paths that are definition clear. Also find all du-paths,
all-uses and all definitions and generate test cases from them.
Solution:
(i)
(ii)
The program graph is given in Figure 3.19. The variables used are a, b, c, a1, a2, a3,
valid.
Define / use nodes for all variables are given below:
S. No.
Variable
1.
node
2.
10
3.
12
4.
a1
22
25. 28
5.
a2
23
25, 28
6.
a3
24
25, 28
7.
valid
5, 15, 18
21, 35
Used at node
13, 14, 22, 23, 24
(iii) The du-paths with beginning and ending nodes are given as:
S. No.
Variable
1.
2.
3.
4.
a1
8, 13
8, 14
8, 22
8, 23
8, 24
10, 13
10, 14
10, 22
10, 23
10, 24
12, 13
12, 14
12, 22
12, 23
12, 24
22. 25
22, 28
(Contd.)
182
Software Testing
(Contd.)
S. No.
Variable
5.
a2
6.
a3
7.
Valid
23, 25
23, 28
24, 25
24, 28
5, 21
5, 35
15, 21
15, 35
18, 21
18, 35
All du-paths are given in Table 4.8 and the test cases for all du-paths are given in Table 4.9.
Table 4.8. All du-paths
All du-paths
All du paths
813
Yes
1214, 1722
Yes
814
Yes
Yes
816, 2022
Yes
1216, 2023
Yes
814, 1722
Yes
1214, 1723
Yes
813, 21,22
Yes
Yes
816, 2023
Yes
1216, 2024
Yes
814, 1723
Yes
1214, 1724
Yes
813, 2123
Yes
Yes
816, 2024
Yes
2225
Yes
814, 1724
Yes
2225, 28
Yes
813, 2124
Yes
2325
Yes
1013
Yes
2325, 28
Yes
1014
Yes
24, 25
Yes
1016, 2022
Yes
24, 25, 28
Yes
1014, 1722
Yes
516, 20, 21
No
1013, 21,22
Yes
514, 1721
No
1016, 2023
Yes
513, 21
Yes
1014, 1723
Yes
No
1013, 2123
Yes
514, 1721, 35
No
1016, 2024
Yes
513, 21, 35
Yes
1014, 1724
Yes
Yes
1013, 2124
Yes
Yes
12, 13
Yes
1821
Yes
1214
Yes
1821, 35
Yes
1216, 2022
Yes
We consider all combinations for the design of du-paths. In this process, test cases
corresponding to some paths are not possible, but these paths are shown in the list of all
du-paths. They may be considered only for completion purpose.
Table 4.9. Test cases for all du-paths
S. No.
Expected output
Remarks
1.
30
20
40
813
2.
30
20
40
814
3.
30
20
40
816, 2022
4.
814, 1722
5.
813, 21,22
6.
30
20
40
816, 2023
7.
814, 1723
8.
813, 2123
9.
30
20
40
816, 2024
10.
814, 1724
11.
813, 2124
12.
30
20
40
1013
13.
30
20
40
1014
14.
30
20
40
1016, 2022
15.
1014, 1722
16.
1013, 21,22
17.
30
20
40
1016, 2023
18.
1014, 1723
19.
1013, 2123
20.
30
20
40
1016, 2024
21.
1014, 1724
22.
1013, 2124
23.
30
20
40
12, 13
24.
30
20
40
1214
25.
30
20
40
1216, 2022
26.
1214, 1722
27.
28.
30
20
40
1216, 2023
29.
1214, 1723
30.
31.
30
20
40
1216, 2024
32.
1214, 1724
33.
34.
30
20
40
2225
(Contd.)
184
Software Testing
(Contd.)
S. No.
Expected output
Remarks
35.
30
40
50
2225, 28
36.
30
20
40
2325
37.
30
40
50
2325, 28
38.
30
20
40
24, 25
39.
30
40
50
24, 25, 28
40.
30
20
40
516, 20, 21
41.
30
10
15
Invalid triangle
514, 1721
42.
102
513, 21
43.
44.
30
10
15
Invalid triangle
514, 1721, 35
45.
102
-1
513, 21, 35
46.
30
20
40
47.
48.
30
10
15
Invalid triangle
1821
49.
30
10
15
Invalid triangle
1821, 35
The all-uses paths are given in Table 4.10 and the test cases for all du-paths are given in
Table 4.11. The all-definitions paths and the test cases are given in Tables 4.12 and 4.13
respectively.
Table 4.10.
All uses
All uses
813
Yes
1216, 2024
Yes
814
Yes
2225
Yes
816, 2022
Yes
2225, 28
Yes
816, 2023
Yes
2325
Yes
816, 2024
Yes
2325, 28
Yes
1013
Yes
24, 25
Yes
1014
Yes
24, 25, 28
Yes
1016, 2022
Yes
516, 20, 21
No
1013, 2123
Yes
514, 1721, 35
No
1016, 2024
Yes
Yes
12,13
Yes
Yes
1214
Yes
1821
Yes
Yes
1821, 35
Yes
1216, 2023
Yes
Expected output
Remarks
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
30
20
20
20
20
20
20
20
20
20
20
20
20
20
20
20
20
40
20
40
20
40
20
10
20
10
10
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
50
40
50
40
50
40
15
40
15
15
813
814
816, 2022
816, 2023
816, 2024
1013
1014
1016, 2022
1013, 2123
1016, 2024
12,13
1214
1216, 20, 21, 22
1216, 2023
1216, 2024
2225
2225, 28
2325
2325, 28
24, 25
24, 25, 28
516, 20, 21
514, 1721, 35
15, 16, 20, 21
15, 16, 20, 21, 35
1821
1821, 35
Table 4.12.
813
Yes
1013
Yes
12, 13
Yes
2225
Yes
2325
Yes
24,25
Yes
516, 20, 21
No
Yes
1821
Yes
186
Software Testing
Table 4.13.
S. No.
Expected output
Remarks
1.
2.
3.
4.
5.
6.
7.
8.
9.
30
30
30
30
30
30
30
30
30
20
20
20
20
20
20
20
20
10
40
40
40
40
40
40
40
40
15
813
1013
12, 13
2225
2325
24,25
516, 20, 21
15, 16, 20, 21
1821
Example 4.6: Consider the program given in Figure 3.21 for the determination of day of the
week. Its input is at triple of positive integers (day, month, year) from the interval
1 day 31
1 month 12
1900 year 2058
The output may be:
[Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday]
Find all du-paths and identify those du-paths that are definition clear. Also find all du-paths,
all-uses and all-definitions and generate test cases for these paths.
Solution:
(i) The program graph is given in Figure 3.22. The variables used in the program are day,
month, year, century, Y, Y1, M, date, validDate, leap.
(ii) Define / use nodes for all variables are given below:
S. No.
Variable
Used at node
1.
Day
2.
Month
3.
Year
10
4.
Century
46, 50
91
5.
53
91
6.
Y1
47, 51
53
(Contd.)
Variable
Used at node
7.
56, 59, 64
67, 71, 74
77, 80, 83
86, 89
91
8.
Date
91
9.
ValidDate
3, 20, 23
28, 31, 34,
38, 41
44
10.
Leap
3, 13, 15
27, 55, 63
(iii) The du-paths with beginning and ending nodes are given as:
S. No.
Variable
1.
Day
2.
Month
6, 19
6, 27
6, 30
6, 37
6, 91
6, 93
6, 96
6, 99
6, 102
6, 105
6, 108
6, 111
6, 115
8, 18
8, 26
8, 37
8, 54
8, 62
8, 70
8, 73
8, 76
8, 79
8, 82
8, 85
8, 93
8, 96
8, 99
8, 102
8, 105
8, 108
8, 111
8, 115
(Contd.)
188
Software Testing
(Contd.)
S. No.
Variable
3.
Year
4.
Century
5.
6.
Y
Y1
7.
8.
Date
9.
ValidDate
10, 11
10, 12
10, 14
10, 45
10, 47
10, 51
10, 93
10, 96
10, 99
10, 102
10, 105
10, 108
10, 111
10, 115
46, 91
50, 91
53, 91
47, 53
51, 53
56, 91
59, 91
64, 91
67, 91
71, 91
74, 91
77, 91
80, 91
83, 91
86, 91
89, 91
91, 92
91, 95
91, 98
91, 101
91, 104
91, 107
3, 44
20, 44
23, 44
28, 44
31, 44
34, 44
38, 44
41, 44
(Contd.)
Variable
10.
Leap
3, 27
3, 55
3, 63
13, 27
13, 55
13, 63
15, 27
15, 55
15, 63
There are more than 10,000 du-paths and it is neither possible nor desirable to show all of
them. The all uses paths and their respective test cases are shown in Table 4.14 and Table 4.15
respectively. The all definitions paths are shown in Table 4.16 and their corresponding test
cases are given in Table 4.17.
Table 4.14.
All uses
619
Yes
618, 26, 27
Yes
Yes
618, 26, 37
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 9193
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 96
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 99
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 102
Yes
621, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 105
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101,104, 107, 108
Yes
621, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 107,
110, 111
Yes
Yes
818
Yes
818, 26
Yes
818, 26, 37
Yes
Yes
Yes
Yes
Yes
Yes
(Contd.)
190
Software Testing
(Contd.)
All uses
821, 25, 4348, 53, 54, 62, 70, 73, 76, 79
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 79, 82
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 79, 82, 85
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 93
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 96
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 99
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 102
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 105
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 107, 108
Yes
821, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 107,
110, 111
Yes
Yes
10, 11
Yes
1012
Yes
1014
Yes
Yes
Yes
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 9193
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 96
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 99
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 102
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 105
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 107,
108
Yes
1021, 25, 4348, 53, 54, 62, 70, 73, 76, 7981, 91, 92, 95, 98, 101, 104, 107,
110, 111
Yes
Yes
Yes
5057, 61, 91
Yes
5361, 91
Yes
47, 48, 53
Yes
5153
Yes
Yes
5961, 91
Yes
(Contd.)
Yes
6769, 91
Yes
71, 72, 91
Yes
74, 75, 91
Yes
77, 78, 91
Yes
80, 81, 91
Yes
83, 84, 91
Yes
86, 87, 91
Yes
89, 90, 91
Yes
91, 92
Yes
91, 92, 95
Yes
Yes
Yes
Yes
Yes
311, 44
No
Yes
2325, 43, 44
Yes
Yes
Yes
3436, 43, 44
Yes
Yes
4144
Yes
318, 26, 27
No
No
No
1318, 26, 27
No
No
No
1518, 26, 27
Yes
Yes
Yes
192
Software Testing
Month
Day
Year
Expected output
Remarks
1.
15
1900
Friday
619
2.
15
1900
Thursday
618, 26, 27
3.
15
1900
Thursday
4.
15
1900
Sunday
618, 26, 37
5.
15
1900
Friday
6.
10
1900
Sunday
7.
11
1900
Monday
8.
12
1900
Tuesday
9.
13
1900
Wednesday
10.
14
1900
Thursday
11.
15
1900
Friday
12.
16
1900
Saturday
13.
15
2059
Invalid Date
14.
15
1900
Friday
818
15.
15
1900
Thursday
818, 26
16.
15
1900
Monday
818, 26, 37
17.
15
1900
Friday
18.
15
1900
Friday
19.
15
1900
Friday
20.
15
1900
Sunday
21.
15
1900
Friday
22.
15
1900
Friday
23.
15
1900
Saturday
Month
Day
Year
Expected output
Remarks
24.
15
1900
Saturday
25.
10
1900
Sunday
26.
11
1900
Monday
27.
12
1900
Tuesday
28.
13
1900
Wednesday
29.
14
1900
Thursday
30.
15
1900
Friday
31.
16
1900
Saturday
32.
15
2059
Invalid Date
33.
15
1900
Friday
10, 11
34.
15
1900
Friday
1012
35.
15
1900
Friday
1014
36.
15
1900
Friday
37.
15
1900
Friday
38.
15
2009
Monday
39.
10
1900
Sunday
40.
11
1900
Monday
41.
12
1900
Tuesday
42.
13
1900
Wednesday
43.
14
1900
Thursday
44.
15
1900
Friday
194
Software Testing
(Contd.)
S. No.
Month
Day
Year
Expected output
Remarks
45.
16
1900
Saturday
46.
15
2059
Invalid Date
47.
15
1900
Monday
48.
15
2009
Thursday
5057, 61, 91
49.
15
2009
Thursday
5361, 91
50.
15
1900
Friday
47, 48, 53
51.
15
2009
Monday
5153
52.
15
2009
Thursday
53.
15
2000
Saturday
5961, 91
54.
15
2009
Thursday
55.
15
2000
Tuesday
6769, 91
56.
15
2009
Sunday
71, 72, 91
57.
15
2009
Wednesday
74, 75, 91
58.
15
2009
Friday
77, 78, 91
59.
15
2009
Monday
80, 81, 91
60.
15
2009
Saturday
83, 84, 91
61.
15
2009
Tuesday
86, 87, 91
62.
15
2009
Wednesday
89, 90, 91
63.
2009
Sunday
91, 92
64.
2009
Monday
91, 92, 95
65.
2009
Tuesday
66.
2009
Wednesday
67.
2009
Thursday
68.
10
2009
Friday
69.
15
1900
Friday
311, 44
70.
15
1900
Friday
71.
31
2009
Invalid Date
2325, 43, 44
(Contd.)
Month
Day
Year
Expected output
Remarks
72.
15
2000
Tuesday
73.
15
2009
Sunday
74.
30
2009
Invalid Date
3436, 43, 44
75.
15
2009
Saturday
38,39, 43, 44
76.
13
2009
Invalid Date
4144
77.
15
1900
Thursday
318, 26, 27
78.
15
1900
Monday
79.
15
1900
Thursday
80.
15
1900
Thursday
1318, 26, 27
81.
15
1900
Monday
82.
15
1900
Thursday
83.
15
1900
Thursday
1518, 26, 27
84.
15
1900
Monday
85.
15
1900
Thursday
Table 4.16.
619
Yes
818
Yes
10, 11
Yes
Yes
5057, 61, 91
Yes
5357, 61, 91
Yes
47, 48, 53
Yes
5153
Yes
Yes
Yes
Yes
6769, 91
Yes
71, 72, 91
Yes
74, 75, 91
Yes
(Contd.)
196
Software Testing
(Contd.)
77, 78, 91
Yes
80, 81, 91
Yes
83, 84, 91
Yes
86, 87, 91
Yes
8991
Yes
91, 92
Yes
311, 44
No
Yes
2325, 43, 44
Yes
Yes
Yes
3436, 43, 44
Yes
Yes
4144
Yes
318, 26, 27
No
1318, 26, 27
No
1518, 26, 27
Yes
Table 4.17.
S. No.
Month
Day
Year
Expected output
Remarks
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
6
6
6
1
1
1
6
6
1
1
1
2
3
4
5
6
8
15
15
15
15
15
15
15
15
15
15
15
15
15
15
15
15
15
1900
1900
1900
1900
2009
2009
1900
2009
2009
2000
2009
2000
2009
2009
2009
2009
2009
Friday
Friday
Friday
Monday
Thursday
Thursday
Friday
Monday
Thursday
Saturday
Thursday
Tuesday
Sunday
Wednesday
Friday
Monday
Saturday
619
818
10, 11
4648, 5357, 61, 91
5057, 61, 91
5357, 61, 91
47, 48, 53
5153
56, 57, 61, 91
59, 60, 61, 91
64, 65, 69, 91
6769, 91
71, 72, 91
74, 75, 91
77, 78, 91
80, 81, 91
83, 84, 91
(Contd.)
Month
Day
Year
Expected output
Remarks
18.
15
2009
Tuesday
86, 87, 91
19.
15
2009
Wednesday
8991
20.
15
2009
Monday
91, 92
21.
15
2059
Invalid Date
311, 44
22.
15
1900
Friday
23.
31
2009
Invalid Date
2325, 43, 44
24.
15
2000
Tuesday
25.
15
2009
Sunday
26.
30
2009
Invalid Date
3436, 43, 44
27.
15
2009
Saturday
28.
13
2009
Invalid Date
4144
29.
15
1900
Thursday
318, 26, 27
30.
15
1900
Thursday
1318, 26, 27
31.
15
1900
Thursday
1518, 26, 27
198
2.
3.
4.
5.
Software Testing
5 int valid = 0
15 valid = 1
18 valid = 1
Hence, we may create S(valid, 5), S(valid, 15) and S(valid, 18) slices for variable valid
of the program.
All statements where variables receive values externally should be considered. Consider
the triangle problem (given in Figure 3.18) where variables a, b and c receive values
externally at line number 8, line number 10 and line number 12 respectively as shown
below:
8 scanf (%lf, &a);
10 scanf (%lf, &b);
12 scanf (%lf, &c);
Hence, we may create S(a, 8), S(b, 10) and S(c, 12) slices for these variables.
All statements where output of a variable is printed should be considered. Consider the
program to find the largest amongst three numbers (given in Figure 3.11) where variable
C is printed at line number 16 and 21 as given below:
16 printf (The largest number is: % f \n, C);
21 printf (The largest number is: % f \n, C)
Hence, we may create S(C, 16) and S(C, 21) as slices for C variable
All statements where some relevant output is printed should be considered. Consider
the triangle classification program where line number 26, 29, 32, 36 and 39 are used for
printing the classification of the triangle (given in Figure 3.18) which is very relevant as
per logic of the program. The statements are given as:
26 printf (Obtuse angled triangle);
29 printf (Right angled triangle);
32 printf (Acute angled triangle);
36 printf (\nInvalid triangle);
39 printf (\nInput Values out of Range);
We may create S(a1, 26), S(a1, 29), S(a1, 32), S(valid, 36) and S(valid, 39) as slices.
These are important slices for the purpose of testing.
The status of all variables may be considered at the last statement of the program. We
consider the triangle classification program (given in figure 3.18) where line number 42 is
the last statement of the program. We may create S(a1, 42), S(a2, 42), S(a3, 42), S(valid,
42), S(a, 42), S(b,42) and S(c, 42) as slices.
We identify two slices for variable c at statement number 3 and statement number 5 as
given in Figure 4.3.
1.
3;
2.
6;
2.
6;
3.
b2;
5.
a + b;
S(c, 5)
Variable c at statement 5
S(c, 3)
Variable c at statement 5
Many slices may be created as per criterion (mentioned in section 4.3.1) of the program
given in the Figure 4.4. Some of these slices are shown below:
1. main ( )
2. {
3. int a, b, c, d, e;
4. printf (Enter the values of a, b and c \ n);
5. scanf (%d %d %d, &a, &b, &c);
7. e = b + c;
9. printf (%d, e);
10. }
200
Software Testing
1. main ( )
2. {
3. int a, b, c, d, e;
4. printf (Enter the values of a, b and c \ n);
5. scanf (%d %d %d, &a, &b, &c);
6. d = a + b;
8. printf (%d, d);
10. }
We also consider the program to find the largest number amongst three numbers as given in
Figure 3.11. There are three variables A, B and C in the program. We may create many slices
like S (A, 28), S (B, 28), S (C, 28) which are given in Figure 4.8.
Some other slices and the portions of the program covered by these slices are given as:
S (A, 6) = {1 6, 28}
S (A, 13) = {114, 18, 27, 28}
S (B, 8) = {1 4, 7, 8, 28}
S (B, 24) = {111, 1820, 2228}
S (C, 10) = {1 4, 9, 10, 28}
S (C, 16) = {112, 1418, 27, 28}
S (C, 21) = {111, 1822, 2628}
It is a good programming practice to create a block even for a single statement. If we
consider C++/C/Java programming languages, every single statement should be covered with
curly braces { }. However, if we do not do so, the compiler will not show any warning / error
message. In the process of generating slices we delete many statements (which are not required
in the slice). It is essential to keep the starting and ending brackets of the block of the deleted
statements. It is also advisable to give a comment do nothing in order to improve the
readability of the source code.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
18.
27.
28.
#include<stdio.h>
#include<conio.h>
void main()
{
oat A,B,C;
clrscr();
printf("Enter number 1:\n");
scanf("%f", &A);
printf("Enter number 2:\n");
scanf("%f", &B);
printf("Enter number 3:\n");
scanf("%f", &C);
if(A>B) {
if(A>C) {
printf("The largest number is: %f\n",A);
}
}
getch();
}
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
18.
19.
20.
22.
23.
24.
25.
26.
27.
28.
#include<stdio.h>
#include<conio.h>
void main()
{
oat A,B,C;
clrscr();
printf("Enter number 1:\n");
scanf("%f", &A);
printf("Enter number 2:\n");
scanf("%f", &B);
printf("Enter number 3:\n");
scanf("%f", &C);
if(A>B) { /*do nothing*/
}
else {
if(C>B) { /*do nothing*/
}
else {
printf("The largest number is: %f\n",B);
}
}
getch();
}
(b) S(B, 28) ={111, 1820, 2228}
(Contd.)
202
Software Testing
(Contd.)
#include<stdio.h>
#include<conio.h>
1.
void main()
2.
3.
oat A,B,C;
4.
clrscr();
5.
6.
scanf("%f", &A);
7.
8.
scanf("%f", &B);
9.
10.
scanf("%f", &C);
11.
18.
19.
else {
20.
if(C>B) {
21.
22.
26.
27.
getch();
28. }
(c) S(C, 28)={111, 1822, 2628}
Figure 4.5. Some slices of program in Figure 3.11
A statement may have many variables. However, only one variable should be used to
generate a slice at a time. Different variables in the same statement will generate a different
program slice. Hence, there may be a number of slices of a program depending upon the slicing
criteria. Every slice is smaller than the original program and can be executed independently.
Each slice may have one or more test cases and may help us to focus on the definition,
redefinition, last statement of the program, and printing/reading of a variable in the slice.
Program slicing has many applications in testing, debugging, program comprehension and
software measurement. A statement may have many variables. We should use only one variable
of a statement for generating a slice.
Table 4.18.
S. No.
Slice
Lines covered
Expected output
1.
S(A, 6)
16, 28
2.
S(A, 13)
3.
S(A, 28)
4.
S(B, 8)
14, 7, 8, 28
5.
S(B, 24)
6.
S(B, 28)
7.
S(C, 10)
14, 9, 10, 28
No output
8.
S(C, 16)
9.
S(C, 21)
10.
S(C, 28)
No output
No output
Slice based testing is a popular structural testing technique and focuses on a portion of the
program with respect to a variable location in any statement of the program. Hence slicing
simplifies the way of testing a programs behaviour with respect to a particular subset of its
variables. But slicing cannot test a behaviour which is not represented by a set of variables or
a variable of the program.
Example 4.7: Consider the program for determination of division of a student. Consider all
variables and generate possible program slices. Design at least one test case from every slice.
Solution:
There are four variables mark1, mark2, mark3 and avg in the program. We may create many
slices as given below:
S (mark1, 7) = {17, 34}
S (mark1, 13) = {114, 33, 34}
S (mark2, 9) = {15, 8, 9, 34}
S (mark2, 13) = {114, 33, 34}
S (mark3, 11) = {15, 10, 11, 34}
S (mark3, 13) = {114, 33, 34}
S (avg, 16)
S (avg, 18)
S (avg, 21)
S (avg, 24)
S (avg, 27)
S (avg, 30)
204
Software Testing
The program slices are given in Figure 4.6 and their corresponding test cases are given in
Table 4.19.
#include<stdio.h>
#include<stdio.h>
#include<conio.h>
#include<conio.h>
1.
void main()
1.
2.
2.
3.
3.
4.
clrscr();
4.
clrscr();
5.
5.
6.
8.
7.
scanf("%d", &mark1);
9.
scanf("%d", &mark2);
34.
34.
(a) S(mark1,7)/S(mark1,34)
void main()
(b) S(mark2,9)/S(mark2,34)
#include<stdio.h>
#include<stdio.h>
#include<conio.h>
#include<conio.h>
1.
void main()
1.
void main()
2.
2.
3.
3.
4.
clrscr();
4.
clrscr();
5.
5.
10.
6.
11.
scanf("%d",&mark3);
7.
scanf("%d", &mark1);
34.
8.
9.
scanf("%d", &mark2);
10.
11.
scanf("%d",&mark3);
12.
if(mark1>100||mark1<0||mark2>100||mark2<0||
mark3>100||mark3<0){
13.
14.
33.
getch();
34.
(c) S(mark3,11)/S(mark3,34)
(d) S(mark1,13)/S(mark2,13)/S(mark3,13)
(Contd.)
#include<stdio.h>
#include<conio.h>
#include<conio.h>
1.
void main()
1.
void main()
2.
2.
3.
3.
4.
clrscr();
4.
clrscr();
5.
5.
6.
6.
7.
scanf("%d", &mark1);
7.
scanf("%d", &mark1);
8.
8.
9.
scanf("%d", &mark2);
9.
scanf("%d", &mark2);
10.
10.
11.
scanf("%d",&mark3);
11.
scanf("%d",&mark3);
12.
if(mark1>100||mark1<0||mark2>100||mark2
<0||mark3>100||mark3<0){ /* do nothing*/
12.
if(mark1>100||mark1<0||mark2>100||mark2
<0||mark3>100||mark3<0){
14.
} /* do nothing*/
14.
15.
else {
15.
else {
16.
avg=(mark1+mark2+mark3)/3;
16.
avg=(mark1+mark2+mark3)/3;
17.
if(avg<40){
17.
if(avg<40){ /* do nothing*/
18.
printf("Fail");
19.
19.
20.
32.
21.
33.
getch();
22.
34.
29.
}
else if(avg>=40&&avg<50) {
printf("Third Division");
}
else { /* do nothing*/
31.
32.
33.
getch();
34.
(e) S(avg,18)
(f) S(avg,21)
#include<stdio.h>
#include<stdio.h>
#include<conio.h>
#include<conio.h>
1.
void main()
1.
void main()
2.
2.
3.
3.
4.
clrscr();
4.
clrscr();
5.
5.
206
Software Testing
(Contd.)
6.
7.
scanf("%d", &mark1);
7.
scanf("%d", &mark1);
8.
8.
9.
scanf("%d", &mark2);
9.
scanf("%d", &mark2);
10.
6.
10.
11.
scanf("%d",&mark3);
11.
scanf("%d",&mark3);
12.
if(mark1>100||mark1<0||mark2>100||mark2
<0||mark3>100||mark3<0) {
/* do nothing*/
12.
if(mark1>100||mark1<0||mark2>100||mar
k2<0||mark3>100||mark3<0) {
/* do nothing*/
14.
14.
15.
else {
16.
avg=(mark1+mark2+mark3)/3;
16.
avg=(mark1+mark2+mark3)/3;
17.
if(avg<40) { /* do nothing*/
17.
if(avg<40) { /* do nothing*/
19.
20.
}
else if(avg>=40&&avg<50) {
/* do nothing*/
22.
23.
15.
}
else if(avg>=50&&avg<60) {
19.
20.
23.
printf("Second Division");
25.
25.
26.
{ /* do nothing*/
27.
28.
else
31.
}
else if(avg>=40&&avg<50) {
/* do nothing*/
22.
24.
29.
else {
32.
29.
33.
getch();
31.
34.
}
else if(avg>=50&&avg<60) {
}
else if(avg>=60&&avg<75) {
printf("First Division");
}
else
{ /* do nothing*/
}
32.
33.
getch();
34.
(g) S(avg,24)
(h) S(avg,27)
#include<stdio.h>
#include<conio.h>
1.
void main()
2.
3.
4.
clrscr();
5.
6.
7.
scanf("%d", &mark1);
8.
(Contd.)
scanf("%d", &mark2);
10.
11.
scanf("%d",&mark3);
12.
if(mark1>100||mark1<0||mark2>100||mark2<0||mark3>100||mark3<0) { /* do nothing*/
14.
15.
else {
16.
avg=(mark1+mark2+mark3)/3;
17.
if(avg<40) { /* do nothing*/
19.
20.
}
else if(avg>=40&&avg<50) {/* do nothing*/
22.
23.
}
else if(avg>=50&&avg<60) {/* do nothing*/
25.
26.
}
else if(avg>=60&&avg<75) {/* do nothing*/
28.
29.
}
else
30.
31.
32.
33.
getch();
34.
}
(i) S(avg,30)/S(avg,34)
Table 4.19.
S.
Slice
No.
Line covered
mark1
1.
S(mark1, 7)
17, 34
65
2.
S(mark1, 13)
114, 33, 34
101
3.
S(mark1, 34 )
17, 34
65
4.
S(mark2, 9)
15, 8, 9, 34
5.
S(mark2, 13)
114, 33, 34
6.
S(mark2, 34)
15, 8, 9, 34
7.
S(mark3, 11)
8.
S(mark3, 13)
114, 33, 34
mark2
mark3
No output
40
50
101
No output
50
65
40
Invalid marks
No output
65
40
Expected output
50
Invalid marks
No output
65
No output
101
Invalid marks
(Contd.)
208
Software Testing
(Contd.)
S.
No.
Slice
Line covered
mark1
9.
S(mark3, 34)
10.
S(avg, 16)
45
11.
S(avg, 18)
12.
S(avg, 21)
13.
mark2
mark3
Expected output
65
No output
50
45
No output
40
30
20
Fail
45
50
45
Third division
S(avg, 24)
55
60
57
Second division
14.
S(avg, 27)
65
67
65
First division
15.
S(avg, 30)
79
80
85
16.
S(avg, 34)
79
80
85
17.
S(avg, 16)
45
50
45
No output
Example 4.8: Consider the program for classification of a triangle. Consider all variables and
generate possible program slices. Design at least one test case from every slice.
Solution:
There are seven variables a, b, c, a1, a2, a3 and valid in the program. We may create
many slices as given below:
i.
S (a, 8)
= {18, 42}
ii. S (b, 10)
= {16, 9, 10, 42}
iii. S (c, 12)
= {16, 11, 12, 42}
iv. S (a1, 22) = {116, 2022, 34, 42}
v.
S (a1, 26) = {116, 2022, 2527, 34, 41, 42}
vi. S (a1, 29) = {116, 2022, 25, 2731, 33, 34, 41, 42}
vii. S (a1, 32) = {116, 2022, 25, 27, 28, 3034, 41, 42}
viii. S (a2, 23) = {116, 20, 21,23, 34, 42}
ix. S (a2, 26) = {116, 20, 21, 23, 2527, 34, 41, 42)
x.
S (a2, 29) = {116, 20, 21, 23, 25, 2731, 33, 34, 41, 42}
xi. S (a2, 32) = {116, 20, 21, 23, 25, 27, 28, 3034, 41, 42}
xii. S (a3, 26) = {116, 20, 21, 2427, 34, 41, 42}
xiii. S (a3, 29) = {116, 20, 21, 24, 25, 2731, 33, 34, 41,42}
xiv. S (a3, 32) = {116, 20, 21, 24, 25, 27, 28, 3034, 41, 42}
xv. S (valid, 5) = {15, 42}
xvi. S (valid, 15) = {116, 20, 42}
xvii. S (valid, 18) = {114, 1620, 42}
xviii. S (valid, 36) = {114, 1620, 21, 3438, 4042}
xix. S (valid, 39) = {113, 20, 21, 34, 35, 3742}
The test cases of the above slices are given in Table 4.20.
Table 4.20.
S.
No.
Slice
Path
1.
S(a, 8)/S(a,42)
18, 42
20
2.
S(b, 10)/S(b,42)
16, 9, 10, 42
3.
S(c, 12)/S(c,42)
4.
S(a1, 22)
30
5.
S(a1, 26)
6.
S(a1, 29)
7.
Expected output
No output
20
No output
20
No output
20
40
No output
30
20
40
Obtuse angled
triangle
30
40
50
S(a1, 32)
50
60
40
8.
S(a1, 42)
30
20
40
No output
9.
S(a2, 23)
30
20
40
No output
10.
S(a2, 26)
40
30
20
Obtuse angled
triangle
11.
S(a2, 29)
50
40
30
12.
S(a2, 32)
40
50
60
13.
S(a2, 42)
30
20
40
No output
14.
S(a3, 24)
30
20
40
No output
15.
S(a3, 26)
20
40
30
Obtuse angled
triangle
16.
S(a3, 29)
40
50
30
17.
S(a3, 32)
50
40
60
18.
S(a3, 42)
30
20
40
No output
19.
S(valid,5)
12, 5, 42
20.
S(valid,15)
116, 20, 42
20
40
30
No output
21.
S(valid,18)
114, 1620, 42
30
10
15
No output
22.
S(valid,36)
30
10
15
Invalid triangle
23.
S(valid,39)
102
24.
S(valid,42)
114, 1620, 42
30
10
15
No output
No output
210
Software Testing
Example 4.9. Consider the program for determination of day of the week given in Figure 3.13.
Consider variables day, validDate, leap and generate possible program slices. Design at least
one test case from each slice.
Solution:
There are ten variables day, month, year, century Y, Y1, M, date, valid date, and leap. We
may create many slices for variables day, validDate and leap as given below:
1.
S(day, 6)
{16, 118}
2.
S(day, 93)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9094, 113, 117, 118}
3.
S(day, 96)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9092, 9497, 110, 112, 113, 117, 118}
4.
S(day, 99)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9092, 94, 95, 97100, 110, 112, 113, 117, 118}
5.
S(day, 102)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9092, 94, 95, 97, 98, 100103, 110, 112, 113, 117,
118}
6.
S(day, 105)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9092, 94, 95, 97, 98, 100, 101, 103106, 110, 112,
113, 117, 118}
7.
S(day, 108)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9092, 94, 95, 97, 98, 100, 101, 103, 104, 106110,
112, 113, 117, 118}
8.
S(day, 111)
{111, 1821, 25, 4348, 53, 54, 61, 62, 69, 70, 72, 73, 75, 76,
7881, 88, 9092, 94, 95, 97, 98, 100, 101, 103, 104, 106, 107,
109113, 117, 118}
9.
S(day, 115)
10.
S(day, 118)
{16, 118}
11.
S(validDate,3)
{13, 118}
12.
S(validDate,20)
13.
S(validDate,23)
14.
S(validDate,28)
{113, 17, 18, 25, 2629, 36, 40, 42, 43, 118}
15.
S(validDate,31)
{111, 18, 25, 26, 27, 2933, 35, 36, 40, 42, 43, 118}
16.
S(validDate,34)
{111, 18, 25, 26, 27, 29, 30, 3236, 40, 42, 43, 118}
17..
S(validDate,38)
18.
S(validDate,41)
19.
S(validDate,118)
20.
S(leap,3)
{13, 118}
21.
S(leap,13)
22.
S(leap,15)
23.
S(leap,118)
The test cases for the above slices are given in Table 4.21.
Table 4.21.
S.
No.
1.
Slice
Lines covered
Month
Day
Year
S(day, 6)
16, 118
Expected
output
No output
2.
S(day, 93)
13
1999
Sunday
3.
S(day, 96)
14
1999
Monday
4.
S(day, 99)
15
1999
Tuesday
5.
S(day, 102)
16
1999
Wednesday
6.
S(day, 105)
17
1999
Thursday
7.
S(day, 108)
18
1999
Friday
8.
S(day, 111)
19
1999
Saturday
9.
S(day, 115)
31
2059
Invalid Date
10.
S(day, 118)
16, 118
19
1999
Saturday
11.
S(validDate,3)
13, 118
No output
12.
S(validDate,20)
15
2009
No output
13.
S(validDate,23)
31
2009
No output
14.
S(validDate,28)
15
2000
No output
(Contd.)
212
Software Testing
(Contd.)
S.
No.
Slice
Lines covered
Month
Day
Year
Expected
output
15.
S(validDate,31)
15
2009
No output
16.
S(validDate,34)
29
2009
No output
17.
S(validDate,38)
15
2009
No output
18.
S(validDate,41)
13
15
2009
No output
19.
S(validDate,118)
13
15
2009
No output
20.
S(leap,3)
13, 118
No output
21.
S(leap,13)
15
2000
No output
22.
S(leap,15)
15
1900
No output
23.
S(leap,118)
15
1900
No output
The test suite is effective but hardly any errors are there in the program. How will a test
suite detect errors when they are not there?
The test suite is not effective and could not find any errors. Although there may be
errors, they could not be detected due to poor selection of test suite. How will errors be
detected when the test suite is not effective?
In both the cases, we are not able to find errors, but the reasons are different. In the first
case, the program quality is good and the test suite is effective and in the second case, the
program quality is not that good and the test suite is also not that effective. When the test suite
is not able to detect errors, how do we know whether the test suite is not effective or the
program quality is good? Hence, assessing the effectiveness and quality of a test suite is very
important. Mutation testing may help us to assess the effectiveness of a test suite and may also
enhance the test suite, if it is not adequate for a program.
mutant of the original program. The behaviour of the mutant may be different from the original
program due to the introduction of a change. However, the original program and mutant are
syntactically correct and should compile correctly. To mutate a program means to change a
program. We generally make only one or two changes in order to assess the effectiveness of
the selected test suite. We may make many mutants of a program by making small changes in
the program. Every mutant will have a different change in a program. Consider a program to
find the largest amongst three numbers as given in Figure 3.11 and its two mutants are given
in Figure 4.7 and Figure 4.8. Every change of a program may give a different output as
compared to the original program.
Many changes can be made in the program given in Figure 3.11 till it is syntactically
correct. Mutant M1 is obtained by replacing the operator > of line number 11 by the operator
=. Mutant M2 is obtained by changing the operator > of line number 20 to operator <.
These changes are simple changes. Only one change has been made in the original program to
obtain mutant M1 and mutant M2.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
#include<stdio.h>
#include<conio.h>
void main()
{
oat A,B,C;
clrscr();
printf("Enter number 1:\n");
scanf("%f", &A);
printf("Enter number 2:\n");
scanf("%f", &B);
printf("Enter number 3:\n");
scanf("%f", &C);
/*Check for greatest of three numbers*/
if(A>B){
if(A>C) {
else {
printf("The largest number is: %f\n",C);
}
}
else {
if(C>B) {
printf("The largest number is: %f\n",C);
}
else {
printf("The largest number is: %f\n",B);
}
}
(Contd.)
214
Software Testing
(Contd.)
27.
28.
getch();
}
M1 : First order mutant
Figure 4.7. Mutant1 (M1) of program to find the largest among three numbers
#include<stdio.h>
#include<conio.h>
1.
void main()
2.
3.
oat A,B,C;
4.
clrscr();
5.
6.
scanf("%f", &A);
7.
8.
scanf("%f", &B);
9.
10.
scanf("%f", &C);
/*Check for greatest of three numbers*/
11.
if(A>B) {
12.
if(A>C) {
13.
14.
15.
}
else {
16.
17.
18.
19.
else {
20.
21.
if(C>B) {
22.
23.
else {
24.
25.
26.
27.
getch();
28.
}
M2 : First order mutant
Figure 4.8. Mutant2 (M2) of program to find the largest among three numbers
The mutants generated by making only one change are known as first order mutants. We may
obtain second order mutants by making two simple changes in the program and third order mutants
by making three simple changes, and so on. The second order mutant (M3) of the program given in
Figure 3.11 is obtained by making two changes in the program and thus changing operator > of
line number 11 to operator < and operator > of line number 20 to as given in Figure 4.9. The
second order mutants and above are called higher order mutants. Generally, in practice, we prefer
to use only first order mutants in order to simplify the process of mutation.
#include<stdio.h>
#include<conio.h>
1.
void main()
2.
3.
oat A,B,C;
4.
clrscr();
5.
6.
scanf("%f", &A);
7.
8.
scanf("%f", &B);
9.
10.
scanf("%f", &C);
11.
if(A>B) {
12.
if(A>C) {
13.
14.
15.
}
else {
16.
17.
18.
19.
else {
20.
if(C>B) {
21.
22.
23.
else {
24.
25.
26.
27.
getch();
28.
}
M3 : Second order mutant
Figure 4.9. Mutant3 (M3) of program to find the largest among three numbers
216
Software Testing
The results of the program are affected by the change and any test case of the test suite
detects it. If this happens, then the mutant is called a killed mutant.
The results of the program are not affected by the change and any test case of the test
suite does not detect the mutation. The mutant is called a live mutant.
The mutation score associated with a test suite and its mutants is calculated as:
Mutation Score
The total number of mutants is equal to the number of killed mutants plus the number of
live mutants. The mutation score measures how sensitive the program is to the changes and
how accurate the test suite is. A mutation score is always between 0 and 1. A higher value of
mutation score indicates the effectiveness of the test suite although effectiveness also depends
on the types of faults that the mutation operators are designed to represent.
The live mutants are important for us and should be analyzed thoroughly. Why is it that any
test case of the test suite not able to detect the changed behaviour of the program? One of the
reasons may be that the changed statement was not executed by these test cases. If executed,
then also it has no effect on the behaviour of the program. We should write new test cases for
live mutants and kill all these mutants. The test cases that identify the changed behaviour
should be preserved and transferred to the original test suite in order to enhance the capability
of the test suite. Hence, the purpose of mutation testing is not only to assess the capability of
a test suite but also to enhance the test suite. Some mutation testing tools are also available in
the market like Insure++, Jester for Java (open source) and Nester for C++ (open source).
Example 4.10: Consider the program to find the largest of three numbers as given in figure
3.11. The test suite selected by a testing technique is given as:
S. No.
1.
2.
3.
4.
A
6
10
6
6
B
10
6
2
10
C
2
2
10
20
Expected Output
10
10
10
20
Generate five mutants (M1 to M5) and calculate the mutation score of this test suite.
Solution:
The mutated line numbers and changed lines are shown in Table 4.22.
Table 4.22. Mutated statements
Mutant No.
Line no.
Original line
M1
M2
M3
M4
M5
11
11
12
20
16
if(A>B)
if(A>B)
if(A>C)
if(C>B)
printf(The Largest number
is:%f\n,C);
if (A<B)
if(A>(B+C))
if(A<C)
if(C=B)
printf(The Largest number
is:%f\n,B);
The actual output obtained by executing the mutants M1-M5 is shown in Tables 4.23-4.27.
Table 4.23. Actual output of mutant M1
Test case
1.
2.
3.
4.
A
6
10
6
6
B
10
6
2
10
C
2
2
10
20
Expected output
10
10
10
20
Actual output
6
6
10
20
Expected output
Actual output
1.
10
10
10
2.
10
10
10
3.
10
10
10
4.
10
20
20
20
218
Software Testing
Expected output
Actual output
1.
2.
3.
4.
6
10
6
6
10
6
2
10
2
2
10
20
10
10
10
20
10
2
6
20
Expected output
Actual output
1.
2.
3.
4.
6
10
6
6
10
6
2
10
2
2
10
20
10
10
10
20
10
10
10
10
Expected output
Actual output
1.
2.
3.
4.
6
10
6
6
10
6
2
10
2
2
10
20
10
10
10
20
10
10
2
20
Mutation Score
Higher the mutant score, better is the effectiveness of the test suite. The mutant M2 is live
in the example. We may have to write a specific test case to kill this mutant. The additional test
case is given in Table 4.28.
Table 4.28.
Test case
5.
A
10
B
5
C
6
Expected output
10
Now when we execute the test case 5, the actual output will be different from the expected
output (see Table 4.29), hence the mutant will be killed.
Table 4.29. Output of added test case
Test case
5.
A
10
B
5
C
6
Expected output
10
Actual output
6
This test case is very important and should be added to the given test suite. Therefore, the
revised test suite is given in Table 4.30.
Table 4.30.
Test case
1.
2.
3.
4.
5.
A
6
10
6
6
10
B
10
6
2
10
5
C
2
2
10
20
6
Expected output
10
10
10
20
10
Example 4.11: Consider the program for classification of triangle given in Figure 3.18. The
test suite A and B are selected by two different testing techniques and are given in Table 4.31
and Table 4.32, respectively. The five first order mutants and the modified lines are given in
Table 4.33. Calculate the mutation score of each test suite and compare their effectiveness.
Also, add any additional test case, if required.
Table 4.31.
Test case
1.
2.
3.
4.
5.
6.
7.
a
30
30
50
30
1
50
50
b
40
20
40
40
50
150
40
c
90
40
60
50
40
90
1
Expected output
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Table 4.32.
Test case
1.
2.
3.
4.
5.
6.
7.
a
40
40
40
30
1
30
30
b
90
30
50
40
50
101
90
c
20
60
60
50
40
90
0
Expected output
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Table 4.33.
Mutant
No.
M1
Line
no.
13
M2
M3
M4
M5
14
21
23
25
Original line
if(a>0&&a<=100&&b>0&&b<=10
0&&c>0&&c<=100) {
if((a+b)>c&&(b+c)>a&&(c+a)>b) {
if(valid==1) {
a2=(b*b+c*c)/(a*a);
if(a1<1||a2<1||a3<1) {
if(a>0||a<=100&&b>0&&b<=100&
&c>0&&c<=100) {
if((a+b)>c&&(b+c)>a&&(b+a)>b) {
if(valid>1) {
a2=(b*b+c*c)*(a*a);
if(a1>1||a2<1||a3<1) {
220
Software Testing
Solution:
The actual outputs of mutants M1-M5 on test suite A are shown in Tables 4.34-4.38.
Table 4.34. Actual output of M1(A)
Test case
1.
2.
3.
4.
5.
6.
7.
a
30
30
50
30
1
50
50
b
40
20
40
40
50
150
40
c
90
40
60
50
40
90
1
Expected output
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Actual output
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Invalid triangle
Invalid triangle
Invalid triangle
Expected output
Actual output
1.
2.
3.
4.
5.
6.
7.
30
30
50
30
1
50
50
40
20
40
40
50
150
40
90
40
60
50
40
90
1
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Expected output
Actual output
1.
2.
3.
4.
5.
6.
7.
30
30
50
30
1
50
50
40
20
40
40
50
150
40
90
40
60
50
40
90
1
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Invalid triangle
Input values are out of range
Input values are out of range
Input values are out of range
Input values are out of range
Input values are out of range
Input values are out of range
Expected output
Actual output
1.
2.
3.
4.
5.
6.
7.
30
30
50
30
1
50
50
40
20
40
40
50
150
40
90
40
60
50
40
90
1
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Expected output
Actual output
1.
30
40
90
Invalid triangle
Invalid triangle
2.
30
20
40
3.
50
40
60
4.
30
40
50
5.
50
40
6.
50
150
90
7.
50
40
Two mutants are M2 and M4 are live. Thus, the mutation score using test suite A is 0.6.
Mutation Score
The actual outputs of mutants M1-M5 on test suite B are shown in Tables 4.39-4.43.
Table 4.39. Actual output of M1
Test case
Expected output
Actual output
1.
40
90
20
Invalid triangle
Invalid triangle
2.
40
30
60
3.
40
50
60
4.
30
40
50
5.
50
40
Invalid triangle
6.
30
101
90
7.
30
90
Invalid triangle
Expected output
Actual output
1.
40
90
20
Invalid triangle
2.
40
30
60
3.
40
50
60
4.
30
40
50
5.
50
40
6.
30
101
90
7.
30
90
222
Software Testing
Expected output
Actual output
1.
2.
3.
4.
5.
6.
7.
40
40
40
30
1
30
30
90
30
50
40
50
101
90
20
60
60
50
40
90
0
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Invalid triangle
Input values are out of range
Input values are out of range
Input values are out of range
Input values are out of range
Input values are out of range
Input values are out of range
Expected output
Actual output
1.
2.
3.
4.
5.
6.
7.
40
40
40
30
1
30
30
90
30
50
40
50
101
90
20
60
60
50
40
90
0
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
a
40
40
40
30
1
30
30
b
90
30
50
40
50
101
90
c
20
60
60
50
40
90
0
Expected output
Invalid triangle
Obtuse angled triangle
Acute angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
Mutation Score
Actual output
Invalid triangle
Acute angled triangle
Obtuse angled triangle
Right angled triangle
Input values are out of range
Input values are out of range
Input values are out of range
The mutation score of Test suite B is higher as compared to the mutation score of test suite
A, hence test suite B is more effective in comparison to test suite A. In order to kill the live
mutant (M4), an additional test case should be added to test suite B as shown in Table 4.44.
Table 4.44.
Test case
Expected output
8.
40
30
20
Expected output
1.
40
90
20
Invalid triangle
2.
40
30
60
3.
40
50
60
4.
30
40
50
5.
50
40
6.
30
101
90
7.
30
90
8.
40
30
20
224
Software Testing
226
Software Testing
EXERCISES
4.1 What is structural testing? How is it different from functional testing?
4.2 What are different types of structural testing techniques? Discuss any two techniques
with the help of examples.
4.3 Discuss the significance of path testing. How can we make it more effective?
4.4 Show with the help of an example that a very high level of statement coverage does
not mean that the program is defect-free.
4.5 Write a program to find roots of a quadratic equation.
(a) Draw program graph, DD path graph. Also find independent paths and generate
test cases.
(b) Find all du-paths and identify those du-paths that are not dc paths. Write test cases
for every du-path.
4.6 Explain define/use testing. Consider the NextDate function and write a program in C
language. Find all du paths and dc paths. Design test cases for every definition to every
usage.
4.7 Consider a program for classification of a triangle. Its input is a triple of positive integers
(say a, b and c) from interval [1, 100]. The output may be one of the following:
[Scalene, Isosceles, Equilateral, Not a triangle, invalid inputs]
(a) Draw a program graph, DD path graph and write test cases for every independent
path.
(b) Find all du-paths and identify those du-paths that are not dc paths. Write test cases
for every du-path.
4.8 What is slice based testing? How can it improve testing? Explain the concept with the
help of an example and write test cases accordingly.
4.9 What is mutation testing? What is the purpose of mutation score? Why are higher order
mutants not preferred?
4.10 Differentiate between black box and white box testing. Consider a program to find the
largest number amongst three numbers. Generate test cases using one black box testing
and one white box testing technique.
4.11 How is data flow testing performed? Is it possible to design data flow test cases
manually? Justify your answer.
4.12 What do you mean by a program graph? What is its use? How can we use it in the
design of du-paths?
4.13 Write a program to print the grade of a student according to the following criteria:
(i) marks > 80
A+ Grade
(ii) 70 < marks 80
A Grade
(iii) 60 < marks 70
B Grade
(iv) 50 < marks 60
C Grade
(v) 40 < marks 50
D Grade
Generate all du-paths and write test cases for all du-paths.
4.14 Consider the program given below. Find all du-paths and identify those du-paths that
are definition clear. Also find all du-paths, all-uses and all-definitions and generate test
cases for these paths.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
#include<stdio.h>
#include<conio.h>
void main()
{
int custnum,numcalls,valid=0;
oat netamount;
clrscr();
printf("Enter customer number & number of calls:");
scanf("%d %d",&custnum,&numcalls);
if(custnum>10&&custnum<20000){
valid=1;
if(numcalls<0){
valid=-1;
}
}
if(valid==1){
if(numcalls<76){
netamount=500;
}
else if(numcalls>75&&numcalls<201){
netamount=500+0.80*(numcalls-75);
}
else if(numcalls>200&&numcalls<501){
netamount=500+1.00*(numcalls-200);
}
else{
netamount=500+1.20*(numcalls-500);
}
printf("\nCustomer number: %d\t Total Charges:%.3f",custnum,netamount);
}
else if(valid==0){
printf("Invalid customer number");
}
else{
printf("Invalid number of calls");
}
getch();
}
4.15 Consider the program for determination of the total telephone bill amount to be paid
by a customer given in exercise 4.14. Consider all variables and generate possible
program slices. Design at least one test case from every slice.
228
Software Testing
4.16 Consider the program for determination of the total telephone bill amount to be paid
by a customer given in exercise 4.14. Generate two first order mutants and one second
order mutant. Design a test suite of five test cases and calculate the mutation score of
the test suite.
4.17 Consider a program to input two numbers and print them in ascending order given
below. Find all du-paths and identify those du-paths that are definition clear. Also find
all du-paths, all-uses and all-definitions and generate test cases for these paths.
#include<stdio.h>
#include<conio.h>
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
void main()
{
int a,b,t;
clrscr();
printf("Enter rst number:");
scanf("%d",&a);
printf("Enter second number:");
scanf("%d",&b);
if(a<b){
t=a;
a=b;
b=t;
}
printf("%d %d",a,b);
getch();
}
4.18 Consider a program to input two numbers and print them in ascending order given in
exercise 4.17. Consider all variables and generate possible program slices. Design at
least one test case from every slice.
4.19 Establish the relationship between data flow testing and slice based testing.
4.20 What is the importance of mutation testing? Why is it becoming popular?
FURTHER READING
Copelands book provides introduction to levels of converge:
Lee Copeland, A Practitioners Guide to Software Test Design, Artech House,
2004.
Key concepts and definitions for structural data flow testing are given by Weyuker:
Weyuker, Elaine J. Data Flow Testing, In MARC94, pp. 247249.
The research paper explains the basic concepts of data flow testing with an example:
J. Badlaney R. Ghatol R. Jadhwani, An Introduction to Data-Flow Testing,
TR-2006-22, 2006.
Software verification has proved its effectiveness in the software world and its usage is
increasing day by day. The most important aspect of software verification is its implementation
in the early phases of the software development life cycle. There was a time when people used
to say that testing is a post-mortem activity where testers are only finding the damages already
been done and making changes in the program to get rid of these damages. Testing primarily
used to be validation oriented where the program was required for execution and was available
only in the later phases of software development. Any testing activity which requires program
execution comes under the validation category. In short, whenever we execute the program
with its input(s) and get output(s), that type of testing is known as software validation.
What is software verification? How can we apply this in the early phases of software
development? If we review any document for the purpose of finding faults, it is called verification.
Reviewing a document is possible from the first phase of software development i.e. software
requirement and analysis phase where the end product is the SRS document.
Verification is the process of manually examining / reviewing a document. The document may
be SRS, SDD, the program itself or any document prepared during any phase of software
development. We may call this as static testing because the execution of the program is not
required. We evaluate, review and inspect documents which are generated after the completion
of every phase of software development. As per IEEE, verification is the process of evaluating
the system or component to determine whether the products of a given development phase satisfy
the conditions imposed at the start of that phase [IEEE01]. Testing includes both verification and
validation activities and they are complementary to each other. If effective verification is carried
out, we may detect a number of faults in the early phases of the software development life cycle
and ultimately may be able to produce a quality product within time and budget.
After the completion of the implementation phase, we start testing the program by executing it.
We may carry out verification also by reviewing the program manually and examining the critical
areas carefully. Verification and validation activities may be performed after the implementation
phase. However, only verification is possible in the phases prior to implementation like the
requirement phase, the design phase and even most of the implementation phase.
5.1.2 Walkthroughs
Walkthroughs are more formal and systematic than peer reviews. In a walkthrough, the author
of the document presents the document to a small group of two to seven persons. Participants
are not expected to prepare anything. Only the presenter, who is the author, prepares for the
meeting. The document(s) is / are distributed to all participants. During the meeting, the author
introduces the material in order to make them familiar with it. All participants are free to ask
questions. All participants may write their observations on any display mechanism like boards,
sheets, projection systems, etc. so that every one may see and give views. After the review, the
author writes a report about findings and any faults pointed out in the meeting.
The disadvantages of this system are the non-preparation of participants and incompleteness
of the document(s) presented by the author(s). The author may hide some critical areas and
unnecessarily emphasize on some specific areas of his / her interest. The participants may not
be able to ask many penetrating questions. Walkthroughs may help us to find potential faults
and may also be used for sharing the documents with others.
5.1.3
Many names are used for this verification method like formal reviews, technical reviews,
inspections, formal technical reviews, etc. This is the most structured and most formal type of
verification method and is commonly known as inspections. These are different from peer
reviews and walkthroughs. The presenter is not the author but some other person who prepares
and understands the document being presented. This forces that person to learn and review that
document prior to the meeting. The document(s) is / are distributed to all participants in
advance in order to give them sufficient time for preparation. Rules for such meetings are fixed
and communicated to all participants. A team of three to six participants are constituted which
is led by an impartial moderator. A presenter and a recorder are also added to this team to
assure that the rules are followed and views are documented properly.
Every person in the group participates openly, actively and follows the rules about how such
a review is to be conducted. Everyone may get time to express their views, potential faults and
critical areas. Important points are displayed by some display mechanism so that everyone can
see them. The moderator, preferably a senior person, conducts such meetings and respects
everyones views. The idea is not to criticize anyone but to understand their views in order to
improve the quality of the document being presented. Sometimes a checklist is also used to
review the document.
After the meeting, a report is prepared by the moderator and circulated to all participants. They
may give their views again, if any, or discuss with the moderator. A final report is prepared after
incorporating necessary suggestions by the moderator. Inspections are very effective to find
potential faults and problems in the document like SRS, SDD, source code, etc. Critical
inspections always help find many faults and improve these documents, and prevent the
propagation of a fault from one phase to another phase of the software development life cycle.
5.1.4
All three verification methods are popular and have their own strengths and weaknesses. These
methods are compared on specific issues and this comparison is given in Table 5.1.
S.
Method
No.
Number of
Prior
1.
1 or 2
Not required
Optional
Inexpensive
some faults
2.
Walkthrough Author
2 to 7 participants
Only presenter
is required to
be prepared
Prepared Knowledge
by presharing
senter
3.
Inspections
3 to 6 participants
Expensive and
requires very
skilled participants
Someone
other than
author
The SRS verification offers the biggest potential saving to the software development effort.
Inspections must be carried out at this level. For any reasonably sized project, the SRS
document becomes critical and the source of many faults. Inspections shall improve this
document and faults are removed at this stage itself without much impact and cost. For small
sized projects, peer reviews may be useful but results are heavily dependent on the ability and
involvement of the reviewer. Walkthroughs are normally used to sensitize participants about
the new initiative of the organization. Their views may add new functionality or may identify
weak areas of the project.
Verification is always more effective than validation. It may find faults that are nearly impossible
to detect during validation. Most importantly, it allows us to find faults at the earliest possible time
and in the early phases of software development. However, in most organizations the distribution
of verification / validation is 20/80, or even less for verification.
5.2.1
The SRS should include the following:
(i) Expectations from the software: The SRS document should clearly specify what do
we expect from the software? and broadly describe functions of the software.
(ii) Interfaces of the software: The software will interact with many persons, hardware,
other devices and external software. These interfaces should be written and forms for
interaction may also be provided.
(iii) Non-functional requirements: These requirements are very important for the success
of the software. They may help us to design a performance criterion in terms of the
requirements response time, speed, availability, recovery time of various software
functions, etc. Some non-functional requirements become attributes of the software like
portability, correctness, maintainability, reliability, security, etc. These non-functional
requirements should also be properly placed in the SRS document.
(iv) Implementation difficulties and limitations: There may be some limitations of the programming language, database integration, etc. All constraints of project implementation including resource limitations and operating environment should also be specified.
The SRS writer(s) should not include design and implementation details. It should be written
in simple, clear and unambiguous language which may be understandable to all developers and
customers.
5.2.2
The SRS document acts as a contract between the developer and customer. This document should
have the following characteristics as given in IEEE recommended practice for software
requirements specifications (IEEE std. 830 1998) [IEEE98a]: Correct, unambiguous, complete,
consistent and ranked for importance and / or stability, verifiable, modifiable, traceable. These
characteristics should be checked and a good SRS document should address these issues.
The IEEE has published guidelines and standards to organize an SRS document (IEEE93,
IEEE98a). It provides different ways to organize the SRS document depending upon the nature
of the project. The first two sections of the SRS document are the same for all projects. The
specific tailoring occurs in section 3 entitled specific requirements. The general organization
of the SRS document is given in Table 5.2.
1.
2.
Introduction
1.1 Purpose0
1.2 Scope
1.4 References
1.5 Overview
The Overall Description
2.1 Product Perspective
2.1.1 System Interfaces
2.1.2 Interfaces
2.1.3 Hardware Interfaces
2.1.4 Software Interfaces
2.1.5 Communications interfaces
2.1.6 Memory Constraints
2.1.7 Operations
2.1.8 Site Adaptation Requirements
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependencies
2.6 Apportioning of Requirements
3.
3.1 External interfaces
3.2 Functions
3.3 Performance Requirements
3.4 Logical Database Requirements
3.5 Design Constraints
3.5.1 Standards Compliance
3.6 Software System Attributes
3.6.1 Reliability
3.6.2 Availability
3.6.3 Security
3.6.4 Maintainability
3.6.5 Portability
4.
5.
6.
5.2.3
The SRS document is reviewed by the testing person(s) by using any verification method (like
peer reviews, walkthroughs, inspections, etc.). We may use inspections due to their effectiveness
and capability to produce good results. We may conduct reviews twice or even more often.
Every review will improve the quality of the document but may consume resources and
increase the cost of the software development.
A checklist is a popular verification tool which consists of a list of critical information content
that a deliverable should contain. A checklist may also look for duplicate information, missing
information, unclear information, wrong information, etc. Checklists are used during reviewing
and may make reviews more structured and effective. An SRS document checklist should
address the following issues:
(i)
Correctness
Every requirement stated in the SRS should correctly represent an expectation from
the proposed software. We do not have standards, guidelines or tools to ensure the
correctness of the software. If the expectation is that the software should respond to all
button presses within 2 seconds, but the SRS states that the software shall respond to all
buttons presses within 20 seconds, then that requirement is incorrectly documented.
(ii) Ambiguity
There may be an ambiguity in a stated requirement. If a requirement conveys more than
one meaning, it is a serious problem. Every requirement must have a single interpretation
only. We give a portion of the SRS document (having one or two requirements) to 10
persons and ask their interpretations. If we get more than one interpretation, then there
may be an ambiguity in the requirement(s). Hence, requirement statement should be
short, explicit, precise and clear. However, it is difficult to achieve this due to the
usage of natural languages (like English), which are inherently ambiguous. A checklist
should focus on ambiguous words and should have potential ambiguity indicators.
(iii) Completeness
The SRS document should contain all significant functional requirements and nonfunctional requirements. It should also have forms (external interfaces) with validity
checks, constraints, attributes and full labels and references of all figures, tables, diagrams,
etc. The completeness of the SRS document must be checked thoroughly by a checklist.
(iv) Consistency
Consistency of the document may be maintained if the stated requirements do not differ
with other stated requirements within the SRS document. For example, in the overall
description of the SRS document, it may be stated that the passing percentage is 50 in
result management software and elsewhere, the passing percentage is mentioned as
40. In one section, it is written that the semester mark sheet will be issued to colleges
and elsewhere it is mentioned that the semester mark sheet will be issued directly to
students. These are examples of inconsistencies and should be avoided. The checklist
should highlight such issues and should be designed to find inconsistencies.
(v) Verifiability
The SRS document is said to be verifiable, if and only if, every requirement stated therein
is verifiable. Non-verifiable requirements include statements like good interfaces,
excellent response time, usually, well, etc. These statements should not be used.
Section II
S. No.
1.
2.
3.
Yes/No/NA
Remarks
4.
5.
(Contd.)
(Contd.)
S. No.
Yes/No/NA
Remarks
6.
7.
8.
9.
10.
11
Ambiguity
Are functional requirements separated from non-functional
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
(Contd.)
(Contd.)
S. No.
34.
35.
36.
37.
38.
39.
Yes/No/NA
Remarks
40.
41.
42.
43.
44.
45.
46.
47.
48.
49.
50.
5.3.1
We may have different views about the essential aspects of software design. However, we have
the IEEE recommended practice for software design description (IEEE STD 1016-1998),
which is a popular way to organize an SDD document [IEEE98b]. The organization of SDD is
given in as per IEEE STD 1016-1998. The entity / attribute information may be organized in
several ways to reveal all the essential aspects of a design. Hence, there may be a number of
ways to view the design. Each design view gives a separate concern about the system. These
views provide a comprehensive description of the design in a concise and usable form that
simplifies information access and assimilation. Two popular design techniques are function
oriented design and object oriented design. We may use any approach depending on the nature
and complexity of the project. Our purpose is to prepare a quality document that translates all
requirements into design entities along with its attributes. The verification process may be
carried out many times in order to improve the quality of the SDD. The SDD provides a bridge
between software requirements and implementation. Hence, strength of the bridge is the
strength of the final software system.
5.3.2
The SDD document verification checklist may provide opportunities to reviewers for focusing
on important areas of the design. The software design starts as a process for translating
requirements stated in the SRS document in a user-oriented functional design. The system
developers, customers and project team may finalise this design and use it as a basis for a more
technical system design. A checklist may help to structure the design review process. There are
many ways to design a checklist which may vary with the nature, scope, size and complexity
of the project. One form of checklist is given in Table 5.4. However, organizations may modify
this checklist depending on software engineering practices and type of the project.
Section I
Name of reviewer
Organization
Group Number
Date of review
Project title
Section II
S. No.
Yes/No/NA
Remarks
1.
2.
(Contd.)
(Contd.)
S. No.
Yes/No/NA
3.
4.
-
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
Remarks
5.4.1
We should follow good software engineering practices to produce good quality maintainable
software within time at a reasonable cost. Source code reviews help us to achieve this objective.
Some of the recommended software engineering practices are given as:
1. Always use meaningful variables.
2. Avoid confusing words in names. Do not abbreviate Number to No; Num is a better
choice.
3. Declare local variables and avoid global variables to the extent possible. Thus, minimize
the scope of variables.
4. Minimize the visibility of variables.
5. Do not overload variables with multiple meanings.
6. Define all variables with meaningful, consistent and clear names.
7. Do not unnecessarily declare variables.
8. Use comments to increase the readability of the source code.
9. Generally, comments should describe what the source code does and not how the source
code works.
10. Always update comments while changing the source code.
11. Use spaces and not TABS.
12. All divisors should be tested for zero or garbage value.
13. Always remove unused lines of the source code.
14. Minimize the module coupling and maximize the module strength.
15. File names should only contain A-Z, a-z, 0-9, _ and ..
16. The source code file names should be all lower case.
17. All loops, branches and logic constructs should be complete, correct and properly nested
and also avoid deep nesting.
18.
19.
20.
21.
22.
23.
5.4.2
A checklist should at least address the above-mentioned issues. However, other issues may also
be added depending on the nature and complexity of the project. A generic checklist is given
in Table 5.5. We may also prepare a programming language specific checklist which may also
consider the specific language issues.
Section I
Name of reviewer
Organization
Group Number
Date of review
Project title
Section II
Table 5.5.
S. No.
Yes/No/NA
1.
2.
3.
4.
5.
6.
7.
8.
Are there any blocks of repeated source code that can be com-
9.
Remarks
10.
(Contd.)
(Contd.)
Variables
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
5.5.1
These documents should be reviewed thoroughly and proper consistency should be maintained
in all documents. The documents should be written in simple, clear and short sentences.
Installation procedure of the software must be explained step by step with proper justifications.
All tables, figures and graphs should be numbered properly. Explanations, if possible, should
be supported by suitable examples. A checklist may help to structure the review process and
must highlight these issues.
5.5.2
A checklist always helps the review process. A generic checklist for user documentation is
given in Table 5.6. However, this may be modified depending on the nature, complexity and
applicability of the project.
Section I
Name of reviewer
Organization
Group Number
Date of review
Project title
Section II
S. No.
1.
2.
3.
4.
5.
6.
7.
8.
Yes/No/NA
Remarks
9.
10.
(Contd.)
(Contd.)
S. No.
11.
Yes/No/NA
Remarks
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
5.6.1
A relevance scale has been given in project audit and review checklist to measure the relevance
of any attribute at the time of auditing the project. Many attributes have been identified in the
checklist. We have to find their relevance to the project at the state when the audit is being
conducted. The relevance scale is given as:
5.6.2
We may have to further indicate the strengths and weaknesses of the attributes given in project
audit and review checklist, in theory and practice on the scale as given below:
An attribute may be relevant moderately at one point of time and may not be relevant at
another point of time. The theory and practice scale is very useful and indicates the
implementation status of any attribute. The checklist also provides a column for assessment
where auditors may give their views, if required, about the attribute in addition to relevance
and practice columns.
This type of quantification is very useful to monitor the progress of the software project.
Auditors should always be non-judgmental and should have good communication skills. They
also need to behave in a positive way in order to get the clear and correct picture of the project.
Project audits must be carried out many times during development. They will definitely
improve the performance, quality and progress of the project.
5.6.3
This checklist has been designed by Hetty Baiz and Nancy Costa at Princeton University, New
Jersey, USA [HETT01] which has been used by many organizations. All activities are reviewed
on the basis of its relevance and strength/weakness at any point of time. Relevance scale and
theory and practice scale may help us to understand the status of various attributes. This may
also indicate the health of the project during its design implementation. An audit checklist is
given below which is to be filled using relevance scale and theory and practice scale.
Item
Attribute
1.1
1.2
b. Project Objectives
e. Time Line
f. Risk Analysis
we've met our goals)
1.3
1.4
Have project accountabilities and responsibilities
1.5
impacts been communicated to all involved
and/or impacted stakeholders and work
1.6
1.7
1.8
1.9
1.10
1.11
1.12
1.13
1.14
1.15
(Contd.)
Item
Attribute
1.16
2.1
2.2.1
2.2.2
2.2.3
2.2.4
2.2.5
2.2.6
2.2.7
2.2.8
2.2.9
2.2.10
2.2.11
2.2.12
2.2.13
(Contd.)
Item
2.2.14
Attribute
ers and adjustments made in overall project
2.2.15
2.2.16
2.2.17
2.2.18
2.3.1
2.3.2
a. Overall status
b. Project performance (achievements and
milestones)
c. Open issues
d. Risks
e. Action items
f. Cost and time performance against plan
g. Quality metrics
h. Client involvement
Are internal project status meetings held at
2.3.3
2.3.4
2.3.5
2.4.1
2.4.2
2.4.3
2.4.4
analyzed and used as a basis for other project
(Contd.)
(Contd.)
Item
2.4.5
Attribute
Is the PPO estimating methodology being used
2.4.6
2.4.7
a. Ranged estimates
b. Sensitivity analysis
c. Risk rating
d. Quality Assurance overheads
e. Contingency
Are project team members involved in detailed
2.4.8
2.5.1
2.5.2
2.5.3
2.5.4
Have all unimplemented risk strategies been
3.1
3.2
3.3
3.3.1
Quality Assurance
Has an overall Quality Assurance Plan been
3.3.2
a. Project Planning
b. Project Management
c. Software Quality Assurance (SQA)
3.3.3
3.3.4
a. SQA Plan
c. Software development standards and methods
d. Methodology
e. Testing Standards and Methodology
f. Data Architecture Standards
g. Data Naming Conventions
h. Technology Architecture
i. Software Metrics
(Contd.)
(Contd.)
Item
3.3.5
Attribute
Are the results of SQA reviews provided to
3.3.6
3.3.7
3.4
3.5
3.6
3.7.1
3.7.2
3.7.3
Is SCM version management and control effectively linked with the testing function to ensure
that integration and regression testing have
3.7.4
3.7.5
3.7.6
4.1.1
4.2.1
4.2.2
4.2.3
4.3.1
4.3.2
4.3.3
(Contd.)
(Contd.)
Item
5.1
5.2
Attribute
5.3
5.4
5.5
5.6
Has a proper project work location been established that will allow the team to work together
5.7
5.8
5.9
5.10
5.11
5.12
5.13
5.14
5.15
(Contd.)
Item
5.16
Attribute
Does the project team have a good understanding of the existing and/or proposed hardware /
5.17
5.18
5.19
5.20
6.1
6.2
6.3
6.4
6.5
6.6
including:
6.7
7.1.1
Methodologies
Is a recognized development method(s) been
7.1.2
does a documented process exist for effective
7.1.3
7.2.1
(Contd.)
(Contd.)
Item
7.2.2
Attribute
Does the CASE integration strategy include a
process for reverse integration (i.e. updating the
analysis tool if a change is made at the design
7.3
7.4
7.5.1
7.5.2
7.5.3
7.5.4
ence:
7.6.1
7.6.2
7.6.3
7.6.4
7.6.5
7.6.6
7.7.1
7.7.2
7.7.3
7.7.4
7.7.5
7.7.6
(Contd.)
(Contd.)
Item
7.7.7
Attribute
Is adequate allowance made for regression
7.7.8
7.7.9
8.1
8.2
8.3
8.4
8.5
8.6
8.7
Does the Application Architecture support information needs at all levels of user operations
8.8.1
8.8.2
9.1
9.2
9.3
9.4
9.5
9.6
9.7
(Contd.)
(Contd.)
Item
9.8
Attribute
Is the data dictionary fully integrated with the
9.9
9.10
9.11
9.12
9.13
9.14
9.15
9.16
9.17
9.18
10.1
10.2
10.3
10.4
10.5
10.6
10.7
10.8
10.9
10.10
10.11
10.12
10.13
Problem Statement
A university is organized in different teaching schools and each school conducts a variety of
programmes. Admissions to the various programmes offered by each school are done through
counselling. Admission slips are issued to the admitted students giving their Roll Numbers,
Name of the School and Name of the Programme. Students are registered in various schools
manually based on the admission slips. Students are assigned papers (compulsory, elective and
practical) depending upon the scheme of the selected programme. Every school is responsible
for its registration process and the following records are prepared and maintained manually:
1.
2.
3.
4.
5.
6.
The university decides to automate the manual registration process in order to improve the
existing system. The proposed system should perform the following functions:
(i)
(ii)
(iii)
(iv)
Issue of login Id and password to the members i.e. student and faculty.
Maintain the personal details of the students.
Maintain the details of the faculty.
Maintain the details of the various papers - Theory (compulsory and elective) and
practical as per the scheme of the programme.
(v) Issue of registration card to the student in every semester.
(vi) List of registered students
Roll number wise
Programme wise
Semester wise
Paper wise
(vii) List of programmes offered by the university.
(viii) List of papers offered in a particular semester for a particular programme.
(ix) List of faculty in a school.
Contents
1. Introduction
1.1. Purpose
1.2. Scope
1.1 Purpose
The University Registration System (URS) maintains the information regarding various papers
to be studied by a student in a particular programme. A paper may be a theory paper. A theory
paper may be of two types: compulsory paper and elective paper. Compulsory papers are
assigned automatically whereas a student has to select the elective papers of his/her choice in
a particular semester.
1.2 Scope
The proposed University Registration System shall perform the following functions:
(i) Issue of login Id and password to the members i.e. student and faculty.
(ii) Maintain the personal details of the students.
(iii) Maintain the details of the various papers - Theory (compulsory and elective) and
practical as per the scheme of the programme.
(iv) Issue of registration card to the student in every semester.
(v) List of registered students
Roll number wise
Programme wise
Semester wise
Paper wise
List of programmes offered by the university.
1.4 References
(a)
(b)
(c)
(d)
1.5 Overview
The rest of the SRS document describes various system requirements, interfaces, features and
functionalities in detail.
2. Overall Description
The URS registers a student for a semester to a programme offered by a school of a university.
It is assumed that the student has already been admitted in the university, for a specific
programme. The system administrator will receive lists of the admitted students (school-wise
and programme-wise) from the academic section responsible for counselling. The establishment
section will provide the list of the faculty members appointed in the school. Based on this
information, the system administrator will generate the login Id and password for the faculty
and the students.
The user can access URS on the Universitys LAN. Students are permitted to Add, Modify
and View their information only after successfully logging on to the system. After registration,
students can print their registration card. Faculty members can make the query about the
registered students and view/print the information of the registered students, papers offered in
the various programmes, etc. The system administrator is the master user of the URS and will
maintain the records of the students, faculty and generate their login Id and password.
The user will have to maintain the following information:
(i) Login details
(ii) School details
(iii) Programme details
(iv) Scheme details
(v) Paper details
(vi) Student details
(vii) Faculty details
The user requires the following reports from the proposed system:
(i) Registration card
(ii) List of registered students
Roll number wise
Programme wise
Semester wise
Paper wise
List of programmes offered by the university.
(iii) List of papers offered in a particular semester of a particular programme.
(iv) List of faculty in a school.
The software should generate the following viewable and printable reports:
(i) Registration Card: It will contain the roll number, name of the student, school, programme,
semester and the papers in which the student is registered. The registration card will be
generated after filling the necessary information in the student registration form.
(ii) List of Students: It will be generated roll number wise, programme wise, semester wise
and paper wise.
(iii)List of Programmes: It will give the details of programmes offered by various schools
of the university.
(iv)List of Papers: It will give the list of papers offered in a particular semester for a
particular programme.
2.1.8 Operations
None
2.4 Constraints
(i) There will be only one administrator.
(ii) The delete operation is available only to the administrator. To reduce the complexity of
the system, there is no check on the delete operation. Hence, the administrator should
be very careful before deletion of any record and he/she will be responsible for data
consistency.
This section contains the software requirements in detail along with the various screens to be
developed.
Login Form
This will be the first form, which will be displayed. It will allow the user to access the
different forms based on his/her role.
(ii)
Change Password
The change password form facilitates the user to change the password. Various fields
available on this form will be:
Login Id: Alphanumeric of 11 characters in length and digits from 0 to 9 only are
allowed. Special characters and blank spaces are not allowed.
Old Password: Alphanumeric in the range of 4 to 15 characters in length. Blank
spaces are not allowed. However, special characters are allowed.
New Password: Alphanumeric in the range of 4 to 15 characters in length. Blank
spaces are not allowed. However, special characters are allowed.
Confirm Password: Alphanumeric in the range of 4 to 15 characters in length. Blank
spaces are not allowed. However, special characters are allowed. The contents of this
field must match with the contents of the new password field.
Semester: This will display the number of all the semesters available in the selected
programme.
Core: This will display all the core papers in the semester selected by the user.
Elective: This will display all the elective papers available in the semester selected by
the user.
(iii)
(iv)
(v)
(vi)
B. Sequencing information
None
D. Error Handling/Response to Abnormal Situations
If the flow of any of the validations does not hold true, an appropriate error message will be
prompted to the user for doing the needful.
(v) The number of theory papers can have a value between 0 and 10.
(vi) The number of elective papers cannot be blank.
(vii) The number of elective papers can have a value between 0 and 10.
(viii) The number of practical papers cannot be blank.
(ix) The number of practical papers can have a value between 0 and 10.
(x) The semester cannot be blank.
(xi) The semester can have a value between 1 and 14.
(xii) The total credit cannot be blank.
(xiii) The total credit can have a value between 5 and 99.
B. Sequencing information
The school and programme details will have to be entered into the system before any scheme
details can be entered into the system.
C. Error Handling/Response to Abnormal Situations
If the flow of any of the validations/sequencing does not hold true, an appropriate error
message will be prompted to the user for doing the needful.
Portability
The application will be easily portable on any windows-based system that has SQL Server
installed.
Paper
Student
Faculty
StudentPaperList
RegistrationOpen
Yes/No/NA
Remarks
1.
2.
3.
No
No
Yes
Refer A
Refer B
-
NA
Yes
4.
5.
6.
Yes
7.
8.
Yes
Yes
Refer C
No
No
Refer D
Refer E
Yes
9.
10.
11.
Ambiguity
Are functional requirements separated from non-functional
(Contd.)
(Contd.)
S. No.
Yes/No/NA
Remarks
12.
13.
14.
No
Yes
No
15.
No
16.
17.
18.
19.
No
Yes
No
No
Refer D
Refer F
Refer G
20.
No
Refer H
21.
22.
23.
24.
No
Yes
No
NA
Refer I
Refer I
-
No
Yes
No
Refer J
28.
29.
30.
Yes
NA
Yes
Refer K
-
31.
32.
33.
34.
35.
Yes
Yes
Yes
Yes
No
36.
37.
38.
Yes
No
Yes
39.
NA
Yes
Yes
25.
26.
27.
40.
ing of each requirement in future development and enhance41.
(Contd.)
(Contd.)
S.No.
42.
43.
44.
45.
Yes/No/NA
No
No
No
No
Remarks
-
46.
47.
Yes
Yes
48.
49.
No
No
Refer L
Yes
50.
Remarks
A.
B.
C.
D.
E.
F.
G.
H.
(ii) In section 2.2, product functions do not specify a major function that the URS would
perform i.e. add/modify/delete faculty details.
I. In section 3.1.1, the details about format/layout of registration card and reports have not
been provided.
J. An ambiguous word User has been used throughout the SRS without specifying that the specified
user under question is either an administrator, data entry operator, student or a faculty.
K. The word user is non verifiable.
L. In section 3.1.4, the communication interfaces have not been stated.
The corrected SRS is provided in Appendix A.
5.25 The later in the development life cycle a fault is discovered, the more expensive it is
to fix. Why?
(a) Due to poor documentation, it takes longer to find out what the software is doing.
(b) Wages are rising.
(c) The fault has already resulted in many faults in documentation, generated faulty
source code, etc
(d) None of the above
5.26 Inspections can find all of the following except:
(a) Variables not defined in the source code
(b) Omission of requirements
(c) Errors in documents and the source code
(d) How much of the source code has been covered
5.27 During software development, when should we start testing activities?
(a) After the completion of code
(b) After the completion of design
(c) After the completion of requirements capturing
(d) After the completion of feasibility study
5.28 In reviews, the moderators job is to:
(a) Prepare minutes of the meeting
(b) Prepare documents for review
(c) Mediate between participants
(d) Guide the users about quality
5.29 What can static analysis not identify?
(a) Memory leaks
(b) Data of defined variables but which are not used
(c) Data of variables used but not defined
(d) Array bound violations
5.30 Which of the following statements are not true?
(a) Inspections are very important for fault identifications.
(b) Inspections should be led by a senior trained person.
(c) Inspections are carried out using documents.
(d) Inspections may often not require documents.
EXERCISES
5.1 Differentiate between verification and validation. Describe various verification
methods.
5.2 Which verification method is most popular and why?
5.3 Describe the following verification methods:
(a) Peer views
(b) Walkthroughs
(c) Inspections
5.4 Explain the issues which must be addressed by the SRS document checklist.
5.5 Discuss the areas which must be included in a good SDD design checklist. How is it
useful to improve the quality of the document?
5.6 Discuss some of the issues related to source code reviews. How can we incorporate
these issues in the source code review checklist?
5.7 Design a checklist for user documentation verification.
5.8 Why do we opt for software project audit? What are the requirements of a relevance
scale and theory and practice scale? Discuss some of the issues which must be
addressed in project audit and review checklist.
5.9 Establish a relationship between verification, validation and testing. Which is most
important and why?
5.10 Discuss some characteristics which the SRS document must address. How can these be
incorporated in a checklist?
5.11 What is the purpose of preparing a checklist? Discuss with the help of a checklist.
5.12 What types of reviews are conducted throughout the software development life
cycle?
5.13 With the help of an example, explain how you will review an SRS document to ensure
that the software development has been correctly carried out.
5.14 What are the differences between inspections and walkthroughs? Compare the relative
merits of both.
5.15 Could review and inspections be considered as part of testing? If yes, why? Give
suitable examples.
FURTHER READING
Horch presents reviews as one of the elements of software quality system. Chapter 3
of the book gives a full account on reviews:
John W. Horch, Practical Guide to Software Quality Management, Artech
House, 2003.
The books by Rakitin and Hollocker provide a full account on how to start a review.
Charles P. Hollocker, Software Reviews and Audits Handbook, New York:
John Wiley & Sons, 1990.
Steve Rakitin, Software Verification and Validation for Practitioners and
Managers, Second Edition, Norwood, MA: Artech House, 2001.
The book by Tom provides an excellent introduction on software inspection and is full
of a large number of real-life case studies.
Gilb, Tom and D. Graham, Software Inspections, MA: Addison-Wesley,
1993.
Fagan shows that by using inspection, cost of errors may be reduced significantly in
the initial phases of software development.
Fagan, M. E., Design and Code Inspections to Reduce Errors in Program
Development, IBM Systems Journal, vol. 15, no. 3, 1976.
Strauss provides a comprehensive guide to software inspections method that may
reduce program defects in the early phases of software design and development:
S.H Strauss, Susan H., and Robert G. Ebenau, Software Inspection Process,
New York: McGraw-Hill, 1994.
6
Creating Test Cases from Requirements
and Use Cases
We prepare Software requirements and specifications document to define and specify user
requirements. In the initial years of software development, requirement writers used to write
stories to explain the expected behaviour of the system and its interactions with the external
world. Ivar Jacobson and his team [JACO99] gave a new dimension and direction to this area
and developed a Unified Modeling Language (UML) for software development. They
introduced use case approach for requirements elicitation and modeling. This is a more formal
way to write requirements. The customer knows what to expect, the developer understands
what to code, the technical writer comprehends what to document and the tester gets what to
test. The use cases address primarily the functional requirements, meaning thereby, the
perspective of the users sitting outside the system. Use cases capture the expectations in terms
of achieving goals and interactions of the users with the system.
The IEEE Std 830-1998 requires us to follow a systematic approach which may include the
design of use cases, various forms for interaction with the user, data validations, reports, error
handling and response to unexpected situations. This is an important document designed in the
initial phases of the software development. In this chapter, techniques have been discussed to
design test cases from requirements. Database testing has also been introduced to design test
cases using interface forms.
286
Software Testing
A use case diagram visually explains what happens when an actor interacts with the system.
Actor represents the role of a user that interacts with the system. They are outsiders to the
system and can be human beings, other systems, devices, etc. We should not confuse the actors
with the devices they use. Devices are mechanisms that actors use to communicate with the
system, but they are not actors themselves. We use the computer keyboard for interaction; in
such a case, we are the actors, and not the keyboard that helps us to interact with the computer.
We use the printer to generate a report; in such case, the printer does not become an actor
because it is only used to convey the information. However, if we want to take information
from an external database, then, this database becomes an actor for our system.
A use case is started by a user for a specific purpose and completes when that purpose is
satisfied. It describes a sequence of actions a system performs to produce an observable output
for the interacting user (actor). The importance of a use case is effectively given by Greg
Fournier [FOUR09] as:
The real value of a use case is the dynamic relationship between the actor
and the system. A well written use case clarifies how a system is used by the
actor for a given goal or reason. If there are any questions about what a
system does to provide some specific value to someone or something outside
the system, including conditional behaviour and handling conditions of when
something goes wrong, the use case is the place to find the answers.
A use case describes who (any user) does what (interaction) with the system, for what goal,
without considering the internal details of the system. A complete set of use cases explains the
various ways to use the system. Hence, use cases define expected behaviours of the system and
helps us to define the scope of the system.
An actor represents the role of a user that interacts with the system. An actor may be a
human being or a system that may interact with a use case keeping in view a particular
goal in mind. Some of the examples of the actors used in the case study of University
registration system (discussed in Section 5.7) are given as:
(i)
(ii)
(iii)
(iv)
Administrator
Student
Faculty
Data entry operator
The URS will allow the above actors to interact with the system with their specific roles.
Depending upon the role, an actor will be able to access only the defined information from the
system. We may define the role of every actor as:
(i)
(ii)
Use Case
Actors
Description
1.
Login
Administrator, student,
faculty, DEO
Login
Administrator
Add School
2.
Maintain School
Details
Change password
Edit School
Delete School
View School
3.
Maintain Programme
Details
Administrator
Add Programme
Edit Programme
Delete Programme
View Programme
4.
Maintain Scheme
Details
Administrator
Add Scheme
Edit Scheme
Delete Scheme
View Scheme
(Contd.)
288
Software Testing
(Contd.)
S. No.
Use Case
Actors
5.
Maintain Paper
Details
Administrator
Description
Add Paper
Edit Paper
Delete Paper
View Paper
6.
Maintain Student
Details
Administrator, DEO
Add Student
Edit Student
Delete Student
View Student
7.
Maintain Faculty
Details
Administrator, DEO
Add Faculty
Edit Faculty
Delete Faculty
View Faculty
8.
Maintain Student
Registration Details
Administrator, student
9.
Generate Report
Administrator, faculty
10.
Generate Registration
Card
Administrator, student
We should identify use cases very carefully, because it has serious implications on the
overall design of the system. Use cases should not be too small or too big. The basic flow and
all alternative flows should also be specified. Identifying and writing good use cases means
providing better foundations for the intended system.
Actors appear outside of a system. A relationship is shown by an arrow and is between the
actor and a use case and vice versa. A relationship between a user (actor) and login use case
is shown as:
If the system is small, one diagram may be sufficient to represent the whole system, but for
large systems, we may require to represent the whole system in many diagrams. The use case
diagram of the URS is given in Figure 6.2. There are ten use cases and four actors. The
administrator interacts with all use cases, whereas a student may interact only with Login,
Maintain student registration details and Generate registration card use cases.
290
Software Testing
Basic Flow: It is the main flow and describes the sequence of events that takes place
most of the time between the actor and the system to achieve the purpose of the use
case.
Alternative Flows: If the basic flow is not successful due to any condition, the
system takes an alternative flow. An alternative flow may occur due to failure of an
expected service because of occurrence of exceptions/errors. There may be more
than one alternative flow of a use case, but may not occur most of the time. Any
alternative flow takes place under certain conditions in order to fulfil the purpose of
a use case.
There is no standard method for writing use cases. Jacobson et al. [JACO99] has given a
use case template which is given in Table 6.1. This captures the requirements effectively and
has become a popular template. Another similar template is given in Table 6.2 which is also
used by many companies [COCK01, QUAT03]. All pre-conditions that are required for the use
case to perform should be identified. Post conditions, which will emerge after the execution of
a use case, should also be defined. The pre-condition is necessary for the use case to start but
is not sufficient to start the use case. The use case must be started by an actor when the precondition is true. A post-condition describes the state of the system after the ending of the use
case. A post-condition for a use case should be true regardless of which flow (basic or any
alternative flows) is executed.
Table 6.1. Jacobsons use case template
1.
2.
Actors. List the actors that interact and participate in this use case.
3.
Flow of Events.
3.1.
List the primary events that will occur when this use case is executed.
3.2.
Any subsidiary events that can occur in the use case should be
4.
Special Requirements.
special requirements in the use case narration. These business rules will also be used for
writing test cases. Both success and failure scenarios should be described here.
5.
Pre-conditions.
be listed.
6.
Post-conditions.
use case executes.
7.
2.
Actors. List the actors that interact and participate in this use case.
3.
Pre-condition.
4.
Post-condition. After the execution of the use case, different states of the systems are
5.
Flow of Events.
5.1.
List the primary events that will occur when this use case is executed.
5.2.
6.
Special Requirements.
as special requirements. Both success and failure scenarios should be described.
Associated use cases. List the related use cases, if any.
7.
We may write a Login use case description of the URS using the template given in Table
6.2 and the same is given below:
Use Case Description of login use case
1
2
3
4
Introduction
This use case documents the steps that must be followed in order to log into the URS
Actors
Administrator
Student
Faculty
Data Entry Operator
Pre-Condition
The user must have a valid login Id and password.
Post-Condition
If the use case is successful, the actor is logged into the system. If not, the system state
remains unchanged.
Basic Flow
It starts when the actor wishes to login to the URS.
(v) The system requests that the actor specify the function he/she would like to perform
(either Login, Change Password).
(vi)
word information.
(iii)
(Contd.)
292
Software Testing
Special Requirement
None
The use cases describe the flow of events which include the basic flow and alternative flows
and this description should be long enough to clearly explain its various steps. The basic flow
and alternative flows are written in simple and clear sentences in order to satisfy all the
stakeholders. A login use case, which allows entering the correct login Id and password, has
two basic flows (the user is allowed to enter after giving the correct login Id and password and
change password) and many alternative flows (incorrect login Id and/or password, invalid
entry and user Exits). If an alternative flow has other alternative flows, the use case may have
a longer description of the flows and may become a complex use case.
We should write the basic flow independently of the alternative flows and no knowledge of
alternative flows is considered. The basic flow must be complete in itself without reference to
the alternative flows. The alternative flow knows the details of when and where it is applicable
which is opposite to the basic flow. It inserts into the basic flow when a particular condition is
true [BITT03].
If all steps are followed in the above mentioned sequence, we may have a good number of
planned and systematic test cases which will result in an efficient and effective testing
process.
Figure 6.3. Basic and alternative flows with pre- and post-conditions
The basic flow is represented by a straight arrow and the alternative flows by the curves.
Some alternative flows return to the basic flow, while others end the use case. At the end of the
basic flow, a post-condition is generated while at the starting of the basic flow, a pre-condition
is required to be set.
There are the following basic and alternative flows in login use case:
Basic flow:
(i) Login
(ii) Change password
Alternative flows:
(i) Invalid Login Id/password
(ii) Invalid entry
(iii) User exits
294
Software Testing
The basic and alternative flows for login use case are given in Figure 6.4. In Figure 6.4 (a),
there is one basic flow which will be executed when the correct login Id and password are
given. This basic flow is expected to be executed most of the time. If any input (Login Id or
password) is invalid, then the alternative flow will be executed and the actor will return to the
beginning of the basic flow. If at any time, the user decides to exit, then alternative flow 3 will
be executed.
Figure 6.4. Basic and alternative flows for login use case (a) Login (b) Change password
Alternative Flow 1: Invalid login Id/password
Alternative Flow 2: Invalid Entry
Alternative Flow 3: User exits
Basic Flow
Scenario 2
Basic Flow
Alternative Flow 1
Scenario 3
Basic Flow
Alternative Flow 1
Scenario 4
Basic Flow
Alternative Flow 3
Scenario 5
Basic Flow
Alternative Flow 3
Alternative Flow 4
Scenario 6
Basic Flow
Alternative Flow 3
Alternative Flow1
Scenario 7
Basic Flow
Alternative Flow 3
Alternative Flow 1
Scenario 8
Basic Flow
Alternative Flow 5
Alternative Flow 2
Alternative Flow 2
(Contd.)
Basic Flow
Alternative Flow 5
Alternative Flow 6
Scenario 10
Basic Flow
Alternative Flow 3
Alternative Flow 5
Scenario 11
Basic Flow
Alternative Flow 3
Alternative Flow 5
Alternative Flow 6
In the basic and alternative flows scenario diagram of the login use case, there are six
possible paths (see Figure 6.4). These six paths become six scenarios of login use case and are
given in Table 6.4. Moreover, the path Basic Flow 1, Alternative Flow 1 and Alternative Flow
3 is impossible as per the use case description, because after giving incorrect login ID/
password, the actor returns to the beginning of the basic flow 1. Similarly, both Basic Flow 2,
Alternative Flow 2 and Alternative Flow 3 are also impossible. All valid combinations of the
basic flow and the alternative flows may be generated as per given use case description.
Table 6.4. Scenario matrix for the login use case
Scenario 1- Login
Basic Flow 1
Basic Flow 1
Alternative Flow 1
Basic Flow 1
Alternative Flow 3
Basic Flow 2
Basic Flow 2
Alternative Flow 2
Basic Flow 2
Alternative Flow 3
Login Id
Password
Old password
New password
Confirm password
These variables are inputs to the system and when an input or combination of specified
inputs is given, a particular behaviour (output) is expected from the system. Hence,
identification of these variables is important and helps in designing the test cases.
296
Software Testing
Input 1
Input 2
Input 3
(selection
variable)
Expected Output
TC1
TC2
TC3
TC4
The test case matrix for login use case is given in Table 6.6.
Scenario 2- Login
TC2
Scenario 4- Change
password
Scenario 5- Change
TC8
Valid input
TC12
n/a
n/a
Valid input
TC11
n/a
Valid input
n/a
Invalid input
Invalid input
Scenario 3- Login
TC7
TC9
Invalid input
TC6
Valid input
Valid input
TC5
Invalid input
Valid input
Valid input
Valid input
Valid input
Invalid input
Valid input
Password
Login id
Valid input
Input 2
Input 1
TC4
Invalid Entry
Scenario 1- Login
TC1
TC3
Test
case
Id
Table 6.6. Test case matrix for the login use case
Valid /Invalid
input
Valid input
Valid input
Valid /Invalid
input
Invalid input
Valid input
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Old password
Input 3
Valid /Invalid
input
Valid input
Invalid input
Valid /Invalid
input
Valid /Invalid
input
Valid input
n/a
n/a
n/a
n/a
n/a
n/a
n/a
New password
Input 4
Password invalid
Password invalid
Login id invalid
Login id invalid
Expected output
Password is changed
in the database
Login Id is not in the
format
--
--
n/a
n/a
n/a
n/a
n/a
n/a
n/a
password
Input 5
n/a
n/a
n/a
n/a
n/a
n/a
Abc123
Abc123
*
Abc1
Abc123
n/a
n/a
n/a
n/a
n/a
n/a
Old
password
n/a
Abc124
R12
*
*
Abc124
n/a
n/a
n/a
n/a
n/a
n/a
New
password
n/a
Abc125
*
*
Abc124
n/a
n/a
n/a
n/a
n/a
n/a
password
n/a
*: do not care conditions (valid/invalid inputs); n/a: option(s) not available for respective scenario
Exit
Scenario 6*
Change password
01164521657
TC12
TC13
01164521657
Invalid entry
Exit
Scenario 401164521657
Change password
Scenario 501165
Change password 01164521657
Scenario 3- Login *
TC11
TC9
TC10
TC8
TC7
01164521657
1234
Abc124
R34
Abc123
R34
Invalid Entry
TC3
TC4
TC5
TC6
Abc123
TC2
01164521658
01164521657
Abc123
Password
Test
Scenario Name Login Id
case Id and description
TC1
Scenario 1- Login 01164521657
Table 6.7.
--
Old password does not match the corresponding password in the database. Other
--
New password
invalid
User is allowed to
change password
Login Id invalid
Old password
invalid
Login id invalid
Password invalid
User is allowed to
login
Login id invalid
Expected output
298
Software Testing
The use cases are available after finalizing the SRS document. If we start writing test cases
in the beginning, we may be able to identify defects at the early phases of the software
development. This will help to ensure complete test coverage as a complete test suite will be
designed directly from the use cases. This technique is becoming popular due to its applicability
in the early phases of software development. The technique is simple and directly applicable
from the use cases that are part of the SRS which is designed as per IEEE standard 830-1998.
Example 6.1: Consider the problem statement of the URS as given in chapter 5. Write the use
case description of use cases and generate test cases from these use cases.
Solution:
The use case description of maintain school details use case is given below:
1 Introduction
Allow the administrator to maintain details of schools in the university. This includes adding, updating, deleting and viewing school information.
2 Actors
Administrator
3 Pre-Conditions
The administrator must be logged onto the system before this use case begins.
4 Post-Conditions
If the use case is successful, the school information is added/updated/deleted/viewed from the
system. Otherwise, the system state is unchanged.
5 Basic Flow
This use case starts when the administrator wishes to add/edit/delete/view school information.
(i) The system requests that the administrator specify the function he/she would like to perform
(either Add a school, Edit a school, Delete a school or View a school).
(ii)
If the administrator selects Add a School, the Add a School
If the administrator selects Edit a School, the Edit a School
If the administrator selects Delete a School, the Delete a School
If the administrator selects View a School, the View a School
Basic Flow 1: Add a School
The system requests that the administrator enter the school information. This includes:
(i) The system requests the administrator to enter the:
1. School name
2. School code
(ii) Once the administrator provides the requested information, the school is added to the system.
Basic Flow 2: Edit a School
(i) The system requests the administrator to enter the school code.
(ii) The administrator enters the code of the school. The system retrieves and displays the school
name information.
(iii) The administrator makes the desired changes to the school information. This includes any of the
(iv)
(v)
tion.
(Contd.)
300
Software Testing
(Contd.)
Basic Flow 3: Delete a School
(i) The system requests the administrator to specify the code of the school.
(ii) The administrator enters the code of the school. The system retrieves and displays the school
information.
(iii)
(iv)
(v) The system deletes the school record.
Basic Flow 4: View a School
(i) The system requests that the administrator specify the school code.
(ii) The system retrieves and displays the school information.
6 Alternative Flows
Alternative Flow 1: Invalid Entry
If in the Add a School or Edit a School
actor leaves the school name/code blank, the system displays an error message. The actor returns to
Alternative Flow 2: School Code Already Exists
If in the Add a School
The Use Case Scenario diagram of Maintain school details use case is given in Figure 6.5
and the scenario matrix is given in Table 6.8. The test case matrix is given in Table 6.9 and
corresponding matrix with actual data values is given in Table 6.10.
Figure 6.5. Basic and alternative flows for maintain school, programme, scheme, paper, or
student details use cases (a) Add details (b) Edit details (c) Delete details (d) View details
The scenario diagram is the same for Maintain Programme details, Maintain Scheme
details, Maintain Paper details, and Maintain Student details.
Maintain School Details
302
Software Testing
(Contd.)
Maintain Scheme Details
Table 6.8.
Scenario 1- Add a school
Basic Flow 1
Basic Flow 1 Alternative Flow 1
Basic Flow 1 Alternative Flow 2
exists
Scenario 5- Edit a school
Basic Flow 1
Basic Flow 2
Basic Flow 2
Basic Flow 2
Basic Flow 2
Basic Flow 2
Basic Flow 3
Basic Flow 3
Basic Flow 3
Basic Flow 3
Alternative Flow 7
Alternative Flow 1
Alternative Flow 3
Alternative Flow 4
Alternative Flow 7
Alternative Flow 3
Alternative Flow 5
Alternative Flow 6
allowed
Scenario 15- View a school
As shown in Table 6.8, there are 17 scenarios for Maintain School Details use case. For
Maintain School Details use case, we identify four input variables for various basic flows in
the use case. There are two input variables (school code, school name) and two selection
variables (edit confirmed, delete confirmed) in this use case. These inputs will be available for
the respective flows as specified in the use case.
Scenario 7- Edit a
school alternative
Scenario 5- Edit a
school
Scenario 6- Edit a
school alternative
TC9
TC8
TC7
TC6
already exists
Scenario 4- Add a
school alternative
Scenario 3- Add a
school alternative
TC4
TC5
Scenario 2- Add a
school alternative
Scenario 1- Add a
school
Scenario 2- Add a
school alternative
TC3
TC2
TC1
Table 6.9.
Valid input
Valid /Invalid
input
Valid /
Invalid
input
n/a
Valid/invalid
input
Valid input
Valid input
Invalid
input
Valid input
Valid /Invalid
input
Valid input
Valid input
Valid /
Invalid
input
Valid input
Invalid input
Valid/invalid
input
Valid input
Input 2
School name
Valid input
Invalid
input
Input 1
School
code
Valid input
n/a
No
n/a
n/a
Yes
n/a
n/a
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
format
--
(Contd.)
Expected result
Valid input
Valid /Invalid
input
n/a
n/a
Valid /Invalid
input
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
n/a
No
n/a
Yes
Deletion
School is displayed
successfully
Expected result
There are 18 test cases created for the given 17 scenarios as shown in Table 6.9. Two test cases are designed for scenario 2. After
constructing these test cases, actual input values are given to all the variables in order to generate actual output and verify whether the
test case passes or fails (refer Table 6.10).
User exits
Valid input
User exits
Valid /
Invalid
input
Valid input
Valid input
Delete cancelled
Valid input
n/a
School name
School
code
Valid input
Input 2
Input 1
(Contd.)
304
Software Testing
School School
ID
Name
Edit
Deletion
Expected
result
Remarks (if
any)
101
n/a
n/a
School is
added successfully
--
TC2
1001
n/a
n/a
University
School of
Information
technology
*
length
TC3
entry
Scenario 2Add a school
alternative
101
12univ
n/a
n/a
entry
102
n/a
University
School of
Management
Studies
n/a
TC5
code already
exists
Scenario 4Add a school
alternative
n/a
n/a
TC6
exits
Scenario 5Edit a school
102
Yes
University
School of
Management
Studies
univ
n/a
n/a
School is
updated successfully
n/a
TC4
TC7
101
entry
TC8
TC9
103
not found
Scenario 8101
Edit cancelled
n/a
n/a
n/a
School not
found
University
School of
Information
technology
No
n/a
Main screen
of school
appears
--
which is
less than 10
characters
School code
does not
exist in the
database
User does
the edit
operation
(Contd.)
306
Software Testing
(Contd.)
Test Scenario and
case description
Id
TC10 Scenario 9Edit a school
alternative
School School
ID
Name
Edit
Deletion
Expected
result
n/a
n/a
-User is
allowed to exit
and returns to
Main menu
101
n/a
n/a
Yes
103
n/a
n/a
n/a
School is
deleted successfully
School not
found
102
n/a
n/a
No
Main screen
of school
appears
--
102
n/a
n/a
n/a
Deletion not
allowed
Programme
of the
school
exists
Deletion not
allowed
TC15 Scenario
14- Delete a
school alter-
n/a
n/a
-User is
allowed to exit
and returns to
Main menu
User exits
TC16 Scenario 15View a school
101
n/a
n/a
n/a
103
n/a
n/a
n/a
School is
-displayed successfully
School not
School code
found
does not
exist in the
database
n/a
n/a
exits
TC11 Scenario
10- Delete a
school
TC12 Scenario
11- Delete a
school alterSchool not
found
TC13 Scenario
12- Delete
a school
alternative
cancelled
TC14 Scenario
13- Delete a
school alter-
Remarks (if
any)
--
School code
does not
exist in the
database
-User is
allowed to exit
and returns to
Main menu
The use case description of Maintain programme details use case is given below:
1.
2.
3.
4.
5.
Introduction
Allow the administrator to maintain details of the programme in the school. This includes adding,
updating, deleting and viewing programme information.
Actors
Administrator
Pre-Conditions
The administrator must be logged onto the system and school details for which the programme
details are to be added/updated/deleted/viewed must be available in the system before this
use case begins.
Post-Conditions
If the use case is successful, the programme information is added/updated/deleted/viewed
from the system. Otherwise, the system state is unchanged.
Basic Flow
This use case starts when the administrator wishes to add/edit/delete/view programme information
(i) The system requests that the administrator specify the function he/she would like to perform
(either Add a programme, Edit a programme, Delete a programme or View a programme)
(ii)
If the administrator selects Add a Programme, the Add a Programme
If the administrator selects Edit a Programme, the Edit a Programme
If the administrator selects Delete a Programme, the Delete a Programme
If the administrator selects View a Programme, the View a Programme
Basic Flow 1: Add a Programme
The system requests that the administrator enters the programme information. This includes:
(i) The system requests the administrator to select an already existing school and also enter:
1. Programme name
2. Duration (select through drop down menu)
3. Number of semesters
4. Programme code
(ii) Once the administrator provides the requested information, the programme is added to the
system.
Basic Flow 2: Edit a Programme
(i) The system requests that the administrator enters the programme code.
(ii) The administrator enters the programme code. The system retrieves and displays the programme information.
(iii) The administrator makes the desired changes to the programme information. This includes
Add a Programme
(iv)
(v)
information.
Basic Flow 3: Delete a Programme
(i) The system requests that the administrator specify the programme code.
(ii) The administrator enters the programme code. The system retrieves and displays the programme information.
(iii)
(iv)
(v) The system deletes the programme record.
Basic Flow 4: View a Programme
(i) The system requests that the administrator specify the programme code.
(ii) The system retrieves and displays the programme information.
(Contd.)
308
Software Testing
(Contd.)
6.
Alternative Flows
Alternative Flow 1: Invalid Entry
duration/number of semesters/programme code or the actor leaves the programme/duration/
number of semesters/programme code empty, the system displays an error message. The actor
Alternative Flow 2: Programme code already exists
7.
8.
The Use Case Scenario of Maintain programme details use case is given in Figure 6.5 and
the scenario matrix is given in Table 6.11.
Table 6.11.
Scenario 1- Add a programme
Basic Flow 1
Basic Flow 1 Alternative Flow 1
Basic Flow 1 Alternative Flow 2
already exists
Basic Flow 1
Basic Flow 2
Basic Flow 2
Basic Flow 2
Basic Flow 2
Basic Flow 2
Basic Flow 3
Basic Flow 3
Alternative Flow 7
Alternative Flow 1
Alternative Flow 3
Alternative Flow 4
Alternative Flow 7
Alternative Flow 3
found
Basic Flow 3 Alternative Flow 5
Basic Flow 3 Alternative Flow 6
allowed
Scenario 15-View a programme
found
Basic Flow 4 Alternative Flow 7
Valid input
do
TC6
TC7
Programme
code already
exist
Valid input
Yes
do
TC5
Yes
Yes
do
Valid
input
Valid
input
TC4
Yes
Valid/
Valid/
Invalid input Invalid
input
No
Invalid entry
do
Scenario
2- Add a
programme
TC2
Input 2
Input 3
Programme Duration
name
Valid input Valid
input
Input 1
School
selected
Yes
TC3
Scenario
Name and
description
Scenario 1- Add
a programme
Test
case
Id
TC1
Table 6.12.
Input 5
Edit
Programme
code
Valid input
n/a
Valid input
Valid input
Valid input
n/a
Input 4
Number of
semesters
Valid input
n/a
n/a
n/a
n/a
n/a
--
Programme code is
Duration is not
selected
--
(Contd.)
format
Programme Entry with the
code already same programme
exists
code already exists
in the database
Duration
invalid
Number of
semesters
invalid
Programme
code invalid
User is
allowed to
add a programme
School not
selected
n/a
n/a
Expected
output
Deletion
confirmed. The test case matrix is given in Table 6.12 and the corresponding matrix with actual data values is given in Table 6.13.
From the use case, we identify seven input variables, out of which four are selection variables. The input variables are programme name,
duration, number of semesters and programme code. The selection variables include school name, duration, edit confirmed and delete
Input 1
School
selected
Scenario 4- Add Yes
a programme
User exits
n/a
TC12 do
Edit cancelled
Programme not
found
TC14 Scenario 8- Edit n/a
a programme
n/a
TC11 do
Invalid entry
TC9
TC8
Test Scenario
case Name and
description
Id
(Contd.)
Valid
input
Valid input
n/a
Valid input
Valid input
Valid
input
n/a
Invalid
input
Valid
input
Valid input
Yes
Input 5
Edit
Programme
code
Valid /
n/a
Invalid input
Valid input
n/a
Valid input
Valid input
No
n/a
Valid/
Valid/
n/a
Invalid input Invalid input
Invalid input Valid/
n/a
Invalid input
Valid/
Valid/
n/a
Invalid input Invalid input
Valid input
Input 3 Input 4
Duration Number of
semesters
Valid /
Valid /
Invalid
Invalid input
input
Valid input
Input 2
Programme
name
Valid /
Invalid input
n/a
n/a
n/a
n/a
n/a
Sub menu
of programme
appears
Duration
invalid
Number of
semesters
invalid
Programme
not found
User is
allowed to
exit and
returns to
Main menu
Programme
is successfully
updated
Programme
name
invalid
n/a
n/a
Expected
output
Deletion
operation
(Contd.)
programme code
does not exist in
the database
User does not
Programme with
Duration is not
selected
--
Programme name
is not in the speci-
--
--
310
Software Testing
User exits
Scenario
Name and
description
Scenario 9- Edit
a programme
n/a
n/a
User exits
Deletion not
allowed
TC20 Scenario
14- Delete a
programme
cancelled
TC19 Scenario
13- Delete a
programme
n/a
n/a
n/a
n/a
n/a
n/a
Input 2
Programme
name
Valid /
Invalid input
n/a
Input 1
School
selected
n/a
Programme not
found
n/a
TC18 Scenario
12- Delete a
programme
alternative
TC16 Scenario
10- Delete a
programme
TC17 Scenario
11- Delete a
programme
Test
case
Id
TC15
(Contd.)
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Input 3 Input 4
Duration Number of
semesters
Valid /
Valid /
Invalid
Invalid input
input
n/a
n/a
n/a
n/a
Valid /
n/a
Invalid input
Valid input
Valid input
Valid input
Valid input
Input 5
Edit
Programme
code
Valid /
n/a
Invalid input
n/a
n/a
No
n/a
Yes
operation
User is
allowed to
exit and
returns to
Main menu
--
(Contd.)
Deletion
cancelled
-User is
allowed to
exit and
returns to
Main menu
Programme -is successfully deleted
Programme Programme with
not found
programme code
does not exist in
the database
n/a
Expected
output
Deletion
n/a
n/a
n/a
Programme not
found
n/a
TC23 Scenario
17- View a
programme
User exits
TC21 Scenario
15-View a
programme
TC22 Scenario
16- View a
programme
n/a
n/a
Input 2
Input 3
Programme Duration
name
n/a
n/a
Test Scenario
case Name and
description
Id
Input 1
School
selected
n/a
(Contd.)
n/a
n/a
Input 4
Number of
semesters
n/a
n/a
Valid /
n/a
Invalid input
Valid input
Input 5
Edit
Programme
code
Valid input
n/a
n/a
n/a
n/a
Deletion
User is
allowed to
exit and
returns to
Main menu
--
Expected
output
312
Software Testing
do
do
Scenario 3- Add
a programme
TC5
TC6
TC7
TC8
do
TC4
User exits
Programme code
already exists
Scenario 4- Add
a programme
Invalid entry
do
Scenario 2- Add
a programme
TC3
TC2
BTech(IT)
Yes
Yes
MCA
MCA
MCA
M12Ca
Yes
Yes
Yes
Yes
No
12
13
12d
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Table 6.13.
n/a
n/a
n/a
n/a
n/a
n/a
n/a
User is
allowed to exit
and returns to
Main menu
Programme
code already
exists
Programme
code invalid
Duration
invalid
Number of
semesters
invalid
Programme
name invalid
(Contd.)
contains digits
Duration not
selected
Number of
semesters should
be 6 as duration
of the programme
is 3
Programme code
Programme name
is not in the speci-
User is
-allowed to add
a programme
School not
User did not
selected
select a school
n/a
Expected
output
Deletion
Programme not
found
User exits
TC16 Scenario 10Delete a programme
TC17 Scenario
11- Delete a
programme
Edit cancelled
TC15 Scenario 9- Edit
a programme
n/a
n/a
n/a
BTech(IT)
n/a
n/a
n/a
n/a
n/a
Programme not
found
TC14 Scenario 8- Edit
a programme
BTech(IT)
n/a
TC12 do
BTech(IT)
n/a
n/a
n/a
n/a
n/a
n/a
n/a
13
16
12
13
14
n/a
n/a
n/a
No
n/a
n/a
n/a
n/a
n/a
Yes
n/a
n/a
n/a
n/a
n/a
n/a
Mca123
Yes
n/a
13
n/a
BTech(IT)
n/a
Deletion
Invalid entry
TC11 do
(Contd.)
Remarks (if any)
(Contd.)
-User is
allowed to exit
and returns to
Main menu
Programme is -successfully
deleted
Programme
Programme with
not found
programme code
does not exist in
the database
Programme is -successfully
edited
Programme name
Programme
name invalid is not in the speciin Edit a procontains digits
Duration
Duration not
invalid
selected
Number of
Number of semessemesters
ters should be 8
as duration is 4
Programme
Programme with
not found
programme code
does not exist in
the database
Blank form
User does not
appears
operation
Expected
output
314
Software Testing
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
TC20 Scenario 14Delete a programme alternative
exits
TC21 Scenario 15-View n/a
a programme
n/a
n/a
n/a
n/a
n/a
n/a
16
13
13
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
13
No
n/a
n/a
n/a
n/a
Deletion
(Contd.)
Scheme of the
record already
exists, hence the
programme with
operation
User is
allowed to exit
and returns to
Main menu
Programme
details are
displayed
Programme
not found
programme code
does not exist in
the database
--
Programme with
--
cannot be deleted
-User is
allowed to exit
and returns to
Main menu
Deletion not
allowed
Sub menu of
programme
appears
Expected
output
316
Software Testing
The test cases for other use cases of the URS case study are given in Appendix II.
login Id. A student cannot select more than the required number of elective papers in a
semester. These domain specific issues should be written as validity checks in order to verify
their correctness.
318
Software Testing
Description
VC1
VC2
VC3
VC4
VC5
VC6
VC7
VC8
Validity
check
Number
Login id
Password
Expected output
Remarks
TC1
VC1
10234567899
Rkhj7689
TC2
VC2
Login id cannot
be blank
TC3
VC3
1234
Invalid login id
Login id should
have 11 digits
TC4
VC4
Ae455678521
Invalid login id
Login id cannot
have alphanumeric characters
TC5
VC4
123$4567867 *
Invalid login id
Login id cannot
have special
characters
TC6
VC4
123 45667897 *
Invalid login id
Login id cannot
have blank spaces
TC7
VC5
10234567899
TC8
VC6
10234567899
Ruc
Invalid password
Password cannot
be less than 4
characters in
length
TC9
VC6
10234567899
Password cannot
be greater than
15 characters in
length
TC10
VC7
10234567899
Rty_uyo
Password can
have underscore
character
TC11
VC8
10234567899
Rt yuii
Invalid password
Password cannot
have blank spaces
Additional validity checks are designed in order to validate various inputs in the Change
password form. The Change password form is given in Figure 6.7 and the validity checks are
given in Table 6.16. The corresponding test cases for each validity check are given in Table
6.17.
320
Software Testing
Description
VC9
VC10
VC11
VC12
VC13
VC14
VC15
VC16
VC17
VC18
VC19
VC20
VC21
VC13
VC14
TC7
TC8
*
*
*
10234567899 Ruc_ui
10234567899 Ruc_ui
10234567899 Ruc_ui
10234567899 Ruc_ui
10234567899 Ruc_ui
10234567899 Ruc_ui
TC12 VC17
TC13 VC17
TC14 VC18
TC15 VC19
TC16 VC20
TC17 VC21
10234567899 Rty_uyo
10234567899 Rt yuii
10234567899 Ruc_ui
TC9 VC14
TC10 VC15
TC11 VC16
*
*
*
*
Rty uyo
Rty_uyo
Rty_uyo
Rty_uyo
Rty_uyo
Rtyuiopki1123678 *
Rrk
*
*
10234567899 Rtyuiopki1123678 *
10234567899 Ruc
123$4567867 *
123 45667897 *
10234567899
*
*
*
VC11
VC11
VC12
TC4
TC5
TC6
*
*
*
*
*
*
*
1234
*
Ae455678521 *
VC9
VC10
VC11
TC1
TC2
TC3
New Password
Password
Old password
Table 6.17.
Remarks
Password
new password
Expected output
322
Software Testing
Description
VC1
VC2
VC3
VC4
VC5
VC6
VC7
VC8
VC9
Solution:
Test cases based on validity checks for Maintain school details form are given in Table 6.19.
Table 6.19.
Test
case
Id
TC1
TC3
TC3
VC4
1_*
TC3
VC4
13
TC4
VC5
1012
TC5
VC6
102
TC6
VC7
102
TC7
VC8
103
TC8
VC8
104
TC9
VC9
105
TC10 VC9
106
TC2
University School of
Management Studies
--
User successfully
-adds the school
record
University 434
Invalid school name School name cannot contain
digits
University_school_of_ Invalid school name School name cannot contain
basic_applied_science
special characters
univer
Invalid school name School name cannot contain
less than 10 characters
>50
Invalid school name School name cannot contain
more than 50 characters
324
Software Testing
Example 6.3: Consider the Maintain programme detail form of the URS as given in Figure 6.9.
This form will be accessible only to the system administrator. It will allow him/her to add/edit/
delete/view information about new/existing programme(s) for the school that was selected in the
Programme Details form. Generate the test cases using validity checks given in Table 6.20.
Description
VC1
VC2
VC3
VC4
VC5
VC6
VC7
VC8
VC9
VC10
VC11
VC12
VC13
VC14
Solution:
The test cases based on validity checks of the Maintain Programme Detail form are given in
Table 6.21.
VC6
VC7
VC7
VC8
VC9
VC10
VC11
VC12
VC13
VC13
VC13
VC14
TC6
TC7
TC8
TC9
TC10
TC11
TC12
TC13
TC14
TC15
TC16
TC17
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
MCA
MCA
MCA
MCA
MCA
MCA
MCA
MC1234
MCA
MCA
MCA(SE)
MC_A(Se)
>50
*
*
15
*
*
*
*
*
123
12
1_0
12a
*
*
*
*
*
The validity checks for other forms of the URS case study are given in Appendix III.
VC5
TC5
Test
Validity
School Programme Duration Number of Programme Expected output
case Id check No. selected name
semesters code
TC1
VC2
Yes
MCA
3
6
12
User is allowed to add a
programme
TC2
VC3
*
*
*
*
Please select school
TC3
VC4
Yes
*
*
*
Please enter programme
name
TC4
VC5
Yes
MC
*
*
*
Invalid programme name
Table 6.21.
--
326
Software Testing
We consider the School details form of the URS for the purpose of verifying various
available operations. This form allows the user to add/delete/edit/view information about new/
existing school(s). The School detail form is given in Figure 6.7 with the following fields:
(i) School Name: Alphanumeric of length of 10 to 50 characters. Digits and special
characters are not allowed. Blank spaces between characters are allowed.
(ii) School Code: Numeric and will have a value from 101 to 199.
There are five buttons i.e. Add, Edit, Delete, View and Exit, for various operations. All
operations except Exit require the involvement of the database to get the desired output.
The test cases for all the four operations of the School details form are given in Table 6.22
and testers are expected to verify every step very carefully.
Table 6.22.
ADD OPERATION
Test Input
Expected Output
Actual Status Comments
Case
Output (Pass/
ID
Fail)
1.
Pre-requisites: The administrator should be logged into the system.
the database.
Open the School Details The school details main window
form from the admin menu opens
Select Add from the drop
down menu
Input
Expected Output
Actual
Output
Status
Comments
(Pass/
Fail)
Pre-requisites: The administrator should be logged into the system and some school
details should have been entered.
base.
Open the School
Details form from
the admin menu
Select a school
code from the
drop down list in
the corresponding
Change the
school name,
click the Edit
button
window appears
sage window
3.
Pre-requisites: The administrator should be logged into the system and some school
details should have been entered.
328
Software Testing
(Contd.)
Test
Input
Case ID
Select Edit from
the drop down
menu
Expected Output
Actual
Output
Status
Comments
(Pass/ Fail)
dow appears
Edit operation is cancelled
and database status does
not change
A drop down list appears
The information of the
selected school code should
not have been updated
Test
Input
Expected Output
Actual
Status
Comments
Case ID
Output (Pass/ Fail)
4.
Pre-requisites: The administrator should be logged into the system and some school
details should have been entered.
from the database.
Open the School
Details form from
the admin menu
Select Delete
from the drop down
menu
Select Delete
from the drop down
menu
All the details of the school
Select a School
are displayed
Code from the
drop down list in
the corresponding
(Contd.)
Input
Expected Output
Actual
Output
Status
(Pass/
Fail)
Comments
message window
appears
If no programme of
the school exists
then deletion is
performed and a
message window
appears Record
has been deleted
successfully
5.
330
Software Testing
(Contd.)
Test
Case
ID
Input
Expected Output
Actual
Output
Status
(Pass/
Fail)
Comments
6
Open the School Details
screen
Select Delete from menu
The School
Details main
window opens
List of all school
codes appear in
the drop down list
All the details of
the school are
displayed
message window
appears
Deletion operation is cancelled
and main window
appears
The current form
comes into view
mode
List of the existing school codes
appear. It should
contain the code
of the school
which was not
deleted.
VIEW OPERATION
Test
Case
ID
7
Input
Expected Output
Actual
Output
Status Comments
(Pass/
Fail)
Pre-requisites: The administrator should be logged into the system and some programme details should have been entered.
(Contd.)
Input
Expected Output
Actual
Output
Status
(Pass/
Fail)
Comments
Database testing is very popular in applications where huge databases are maintained and
items are regularly searched, added, deleted, updated and viewed. Many queries are generated
by various users simultaneously and the database should be able to handle them in a reasonable
time frame, for example, web testing, inventory management and other large database
applications. Testing of stress level for a database is a real challenge for the testers. Some
commercially available tools make tall claims about stress testing; however their applicability
is not universally acceptable.
332
Software Testing
EXERCISES
6.1 What is a use case? How is it different from a use case diagram? What are the
components of a use case diagram?
6.2 How do we write use cases? Describe the basic and alternative flows in a use case.
Discuss any popular template for writing a use case.
6.3 Explain the various steps for the generation of test cases from the use cases. Why do
we identify variables in a use case?
6.4 Design a problem statement for library management system and generate the
following:
(i) Use cases
(ii) Use case diagram
(iii) Basic and alternative flows in use cases
(iv) Test cases from use cases.
6.5 Consider the problem of railway reservation system and design the following:
(i) Use cases
(ii) Use case diagram
(iii) Test cases from use cases
What is the role of an actor in use case diagram? Discuss with the help of a suitable
example.
6.6 Discuss the guidelines for the creation of use cases for designing of any system. Is
there any limit for the number of use cases in any system?
6.7 Consider the problem statement of a university registration system as given in Chapter
5. Write the maintain scheme detail use case description and also generate test cases
accordingly.
6.8 What are various strategies for data validity? Discuss with the help of an example.
6.9 Consider the scheme detail form given in chapter 5 of a university registration system.
Write the validity checks and generate test cases from the validity checks.
6.10 What are the guidelines for generating the validity checks? Explain with the help of an
example.
6.11 Why should we do database testing? Write some advantages and applications of data
base testing.
334
Software Testing
6.12 Write the problem statement for library management system. Design test cases for
various operations using database testing.
6.13 Design the test cases for all operations of maintain scheme detail form of university
registration system using database testing.
6.14 Why do we consider domain specific checks very important for generating validity
checks? How are they related with the functionality of the system?
6.15 Why should data validation be given focus in testing? Why do we expect valid data?
How do we prevent the entry of invalid data in a system?
FURTHER READING
Jacobson provides a classic introduction to use case approach:
I.V. Jacobson, Object Oriented Software Engineering: A Use Case Driven
Approach, Pearson Education, 1999.
Cockburn provides guidance for writing and managing use cases. This may help to
reduce common problems associated with use cases:
A. Cockburn, Writing Effective Use Cases, Pearson Education, 2001.
Hurlbuts paper provides a survey on approaches for formalizing and writing use
cases:
R. Hurlbut, A Survey of Approaches for Describing and Formalizing UseCases, Technical Report 9703, Department of Computer Science, Illinois
Institute of Technology, USA, 1997.
Fournier has discussed the relationship between actor and system in:
G. Fournier, Essential Software Testing-A Use Case Approach, CRC Press,
2009.
Other useful texts are available at:
G. Booch, J. Rumbaugh and I.V. Jacobson, The Unified Modeling Language
User Guide, Addison-Wesley, Boston, 1999.
Rational Requisite Pro, Users Guide, Rational Software Corporation, 2003.
Rational Rose Users Guide, IBM Corporation, 2003.
N.R. Tague, The Quality Toolbox, ASQ Quality Press, 2004.
The most current information about UML can be found at:
http://www.rational.com
http://www.omg.org
7
Selection, Minimization and Prioritization of Test
Cases for Regression Testing
336
Software Testing
requirements of the test plan. When we modify software, we typically re-test it. This process
of re-testing is called regression testing.
Hence, regression testing is the process of re-testing the modified parts of the software and
ensuring that no new errors have been introduced into previously tested source code due to
these modifications. Therefore, regression testing tests both the modified source code and other
parts of the source code that may be affected by the change. It serves several purposes like:
Increases confidence in the correctness of the modified program.
Locates errors in the modified program.
Preserves the quality and reliability of the software.
Ensures the softwares continued operation.
We typically think of regression testing as a software maintenance activity; however, we
also perform regression testing during the latter stage of software development. This latter
stage starts after we have developed test plans and test suites and used them initially to test the
software. During this stage of development, we fine-tune the source code and correct errors in
it, hence these activities resemble maintenance activities. The comparison of development
testing and regression testing is given in Table 7.1.
Table 7.1. Comparison of regression and development testing
S.No.
Development Testing
Regression Testing
1.
2.
3.
4.
5.
6.
7.
the software.
Performed under the pressure of
release date.
Separate allocation of budget and
time.
Focus is on the whole software with
the software.
Performed in crisis situations, under greater time
constraints.
Practically no time and generally no separate
budget allocation.
affected portions with the objective of ensuring
8.
Selection, Minimization and Prioritization of Test Cases for Regression Testing 337
failure. After identification of the reason(s), the source code is modified and we generally do
not expect the same failure again. In order to ensure this correctness, we re-test the source code
with a focus on modified portion(s) of the source code and also on affected portion(s) of the
source code due to modifications. We need test cases that target the modified and affected
portions of the source code. We may write new test cases, which may be a time and effort
consuming activity. We neither have enough time nor reasonable resources to write new test
cases for every failure. Another option is to use the existing test cases which were designed for
development testing and some of them might have been used during development testing. The
existing test suite may be useful and may reduce the cost of regression testing. As we all know,
the size of the existing test suite may be very large and it may not be possible to execute all
tests. The greatest challenge is to reduce the size of the existing test suite for a particular
failure. The various steps are shown in Figure 7.1. Hence, test case selection for a failure is the
main key for regression testing.
338
Software Testing
a fault in the modified program. We consider a program given in Figure 7.2 along with its modified
version where the modification is in line 6 (replacing operator * by -). A test suite is also given
in Table 7.2.
1. main( )
1. main ( )
2. {
2. {
3. int a, b, x, y, z;
3. int a, b, x, y, z;
5. x = a + b ;
5. x = a + b;
6. y = a* b;
6. y = a b;
8. z = x / y ;
8. z = x / y ;
9. }
9. }
10. else {
10. else {
11. z = x * y ;
11. z = x * y ;
12. }
12. }
13. printf (z = %d \ n, z );
14. }
14. }
Inputs
a
2
1
3
3
Execution History
b
1
1
2
3
1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14
1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14
1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14
1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14
In this case, the modified line is line number 6 where a*b is replaced by a-b. All four test
cases of the test suite execute this modified line 6. We may decide to execute all four tests for
the modified program. If we do so, test case 2 with inputs a = 1 and b = 1 will experience a
divide by zero problem, whereas others will not. However, we may like to reduce the number
of test cases for the modified program. We may select all test cases which are executing the
modified line. Here, line number 6 is modified. All four test cases are executing the modified
line (line number 6) and hence are selected. There is no reduction in terms of the number of
test cases. If we see the execution history, we find that test case 1 and test case 2 have the same
execution history. Similarly, test case 3 and test case 4 have the same execution history. We
choose any one test case of the same execution history to avoid repetition. For execution
history 1 (i.e. 1, 2, 3, 4, 5, 6, 7, 8, 10, 11), if we select test case 1, the program will execute
well, but if we select test case 2, the program will experience divide by zero problem. If
several test cases execute a particular modified line, and all of these test cases reach a particular
Selection, Minimization and Prioritization of Test Cases for Regression Testing 339
affected source code segment, minimization methods require selection of only one such test
case, unless they select the others for coverage elsewhere. Therefore, either test case 1 or test
case 2 may have to be selected. If we select test case 1, we miss the opportunity to detect the
fault that test case 2 detects. Minimization techniques may omit some test cases that might
expose fault(s) in the modified program. Hence, we should be very careful in the process of
minimization of test cases and always try to use safe regression test selection technique (if at
all it is possible). A safe regression test selection technique should select all test cases that can
expose faults in the modified program.
340
Software Testing
may be to select those test cases that reveal the difference in the output of the original program
and the modified program. These test cases are known as modification revealing test cases.
These test cases target that portion of the source code which makes the output of the original
program and the modified program differ. Unfortunately, we do not have any effective
technique to do this. Therefore, it is difficult to find fault revealing test cases and modification
revealing test cases.
The reasonable objective is to select all those test cases that traverse the modified source
code and the source code affected by modification(s). These test cases are known as
modification traversing test cases. It is easy to develop techniques for modification traversing
test cases and some are available too. Out of all modification traversing test cases, some may
be modification revealing test cases and out of some modification revealing test cases, some
may be fault revealing test cases. Many modification traversing techniques are available but
their applications are limited due to the varied nature of software projects. Aditya Mathur has
rightly mentioned that [MATH08]:
The sophistication of techniques to select modification traversing tests
requires automation. It is impractical to apply these techniques to large
commercial systems unless a tool is available that incorporates at least one safe
test minimization technique. Further, while test selection appears attractive
from the test effort point of view, it might not be a practical technique when tests
are dependent on each other in complex ways and that this dependency cannot
be incorporated in the test selection tool.
We may effectively implement any test case selection technique with the help of a testing
tool. The modified source code and source code affected by modification(s) may have to be
identified systematically and this selected area of the source code becomes the concern of test
case selection. As the size of the source code increases, the complexity also increases, and need
for an efficient technique also increases accordingly.
Selection, Minimization and Prioritization of Test Cases for Regression Testing 341
code coverage, etc. A minimization technique may further reduce the size of the selected test
cases based on some criteria. We should always remember that any type of minimization is
risky and may omit some fault revealing test cases.
Every reduction activity has an associated risk. All prioritization guidelines should be
designed on the basis of risk analysis. All risky functions should be tested on higher priority.
The risk analysis may be based on complexity, criticality, impact of failure, etc. The most
important is the impact of failure which may range from no impact to loss of human life
and must be studied very carefully.
The simplest priority category scheme is to assign a priority code to every test case. The
priority code may be based on the assumption that test case of priority code 1 is more
important than test case of priority code 2. We may have priority codes as follows:
Priority code 1
Priority code 2
Priority code 3
Priority code 4
Priority code 5
:
:
:
:
:
There may be other ways for assigning priorities based on customer requirements or market
conditions like:
Priority code 1
Priority code 2
Priority code 3
:
:
:
We may design any priority category scheme, but a scheme based on technical considerations
always improves the quality of the product and should always be encouraged.
342
Software Testing
Risk analysis is a process of identifying the potential problems and then assigning a
probability of occurrence of the problem value and impact of that problem value for each
identified problem. Both of these values are assigned on a scale of 1 (low) to 10 (high). A factor
risk exposure is calculated for every problem which is the product of probability of
occurrence of the problem value and impact of that problem value. The risks may be ranked
on the basis of its risk exposure. A risk analysis table may be prepared as given in Table 7.3.
These values may be calculated on the basis of historical data, past experience, intuition and
criticality of the problem. We should not confuse with the mathematical scale of probability
values which is from 0 to 1. Here, the scale of 1 to 10 is used for assigning values to both the
components of the risk exposure.
Table 7.3. Risk analysis table
S. No.
Potential Problem
Probability of
occurrence of problem
Impact of that
Problem
Risk Exposure
1.
2.
3.
4.
The case study of University Registration System given in chapter 5 is considered and its
potential problems are identified. Risk exposure factor for every problem is calculated on the
Selection, Minimization and Prioritization of Test Cases for Regression Testing 343
basis of probability of occurrence of the problem and impact of that problem. The risk
analysis is given in Table 7.4.
Table 7.4. Risk analysis table of University Registration System
S. No.
Potential Problems
Probability of
occurrence of
problem
Impact of
that Problem
Risk
Exposure
1.
2.
12
3.
4.
5.
Unauthorised access
10
10
6.
Database corrupted
18
7.
Ambiguous documentation
8.
9.
10.
344
Software Testing
=
=
=
=
In this case, a risk with high probability value is given more importance than a problem with
high impact value. We may change this and may decide to give more importance to high impact
value over the high probability value and is shown in Figure 7.4. Hence, PC-2 and PC-3 will
swap, but PC-1 and PC-4 will remain the same.
There may be situations where we do not want to give importance to any value and assign
equal importance. In this case, the diagonal band prioritization scheme may be more suitable
as shown in Figure 7.5. This scheme is more appropriate in situations where we have difficulty
in assigning importance to either probability of occurrence of the problem value or impact
of that problem value.
We may also feel that high impact value must be given highest priority irrespective of the
probability of occurrence value. A high impact problem should be addressed first, irrespective
of its probability of occurrence value. This prioritization scheme is given in Figure 7.6. Here,
the highest priority (PC-1) is assigned to high impact value and for the other four quadrants;
any prioritization scheme may be selected. We may also assign high priority to high probability
of occurrence values irrespective of the impact value as shown in Figure 7.7. This scheme may
not be popular in practice. Generally, we are afraid of the impact of the problem. If the impact
value is low, we are not much concerned. In the risk analysis table (see Table 7.4), ambiguous
documentations (S. No. 7) have high probability of occurrence of problem value (8), but
Selection, Minimization and Prioritization of Test Cases for Regression Testing 345
impact value is very low (1). Hence, these faults are not considered risky faults as compared
to unauthorized access (S. No. 5) where probability of occurrence value is very low (1) and
impact value is very high (10).
346
Software Testing
After the risks are ranked, the high priority risks are identified. These risks are required to
be managed first and then other priority risks in descending order. These risks should be
discussed in a team and proper action should be recommended to manage these risks. A risk
matrix has become a powerful tool for designing prioritization schemes. Estimating the
probability of occurrence of a problem may be difficult in practice. Fortunately, all that matters
when using a risk matrix is the relative order of the probability estimates (which risks are more
likely to occur) on the scale of 1 to 10. The impact of the problem may be critical, serious,
moderate, minor or negligible. These two values are essential for risk exposure which is used
to prioritize the risks.
Selection, Minimization and Prioritization of Test Cases for Regression Testing 347
The technique uses two algorithms one for modification and the other for deletion. The
following information is available with us and has been used to design the technique:
(i) Program P with its modified program P .
(ii) Test suite T with test cases t1, t2, t3,..tn.
(iii) Execution history (number of lines of source code covered by a test case) of each test
case of test suite T.
(iv) Line numbers of lines of source code covered by each test case are stored in a two
dimensional array (t11, t12, t13,tij).
Description
1.
T1
2.
modloc
3.
mod_locode
4.
nfound
5.
pos
6.
candidate
7.
priority
The following steps have been followed in order to select and prioritize test cases from test
suite T based on the modification in the program P.
348
Software Testing
Execution history
1, 2, 20, 30, 40, 50
1, 3, 4, 21, 31, 41, 51
5, 6, 7, 8, 22, 32, 42, 52
6, 9, 10, 23, 24, 33, 43, 54
5, 9, 11, 12, 13, 14, 15, 20, 29, 37, 38, 39
15, 16, 17, 18, 19, 23, 24, 25, 34, 35, 36
26, 27, 28, 40, 41, 44, 45, 46
46, 47, 48, 49, 50, 53, 55
55, 56, 57, 58, 59
3, 4, 60
The first portion of the modification algorithm is used to initialize and read values of
variables T1, modloc and mod_locode.
Selection, Minimization and Prioritization of Test Cases for Regression Testing 349
(ii) Repeat for j=1 to length of the test case
If candidate[i] 1 then
Repeat for k=1 to modied lines of source code
If t1[i][j]=mod_locode[k] then
Increment nfound[i] by one
Increment l by one
The status of test cases covering modified lines of source code is given in Table 7.7.
Table 7.7.
Test Cases
T1
T2
T3
T4
T5
T6
T7
T8
T9
T10
1, 2
1
5
5, 15
15, 35
45
55
55
-
2
1
1
0
2
2
1
1
1
0
Consider the third portion of modification algorithm. In this portion, we sort the nfound
array and select the test case with the highest value of nfound as a candidate for selection. The
test cases are arranged in increasing order of priority.
The test cases with less value have higher priority than the test cases with higher value.
Hence, the test cases are sorted on the basis of number of modified lines covered as shown in
Table 7.8.
350
Software Testing
Table 7.8.
Test Cases
Numbers of lines
matched
Number of Matches
(nfound)
Candidate
Priority
T1
T5
T6
T2
T3
T7
T8
T9
T4
T10
1, 2
5, 15
15, 35
1
5
45
55
55
-
2
2
2
1
1
1
1
1
0
0
1
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
The test case with candidate=1 is selected in each iteration. In the fourth portion of the
algorithm, the modified lines of source code included in the selected test case are removed
from the mod_locode array. This process continues until there are no remaining modified lines
of source code covered by any test case.
Since test case T1 is selected and it covers 1 and 2 lines of source code, these lines will be
removed from the mod_locode array.
mod_locode = [1, 2, 5, 15, 35, 45, 55] - [1, 2] = [5, 15, 35, 45, 55]
The remaining iterations of the modification algorithm are shown in tables 7.9-7.12.
Table 7.9.
Test Cases
Matches found
Candidate
Priority
T5
T6
T3
T7
T8
T9
T2
T4
T10
2
2
1
1
1
1
0
0
0
5, 15
15, 35
5
45
55
55
-
1
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
mod_locode = [5, 15, 35, 45, 55] [5, 15] = [35, 45, 55]
Selection, Minimization and Prioritization of Test Cases for Regression Testing 351
Table 7.10.
Test Cases
Matches found
Candidate
Priority
T6
35
T7
45
T8
55
T9
55
T2
T3
T4
T10
Matches found
Candidate
Priority
T7
45
T8
55
T9
55
T2
T3
T4
T10
Matches found
Candidate
Priority
T8
55
T9
55
T2
T3
T4
T10
352
Software Testing
Hence test cases T1, T5, T6, T7 and T8 need to be executed on the basis of their
corresponding priority. Out of ten test cases, we need to run only 5 test cases for 100% code
coverage of modified lines of source code. This is 50% reduction of test cases.
Variable
Description
1.
T1
2.
deloc
3.
del_locode
4.
count
5.
match
6.
deleted
Execution history
T1
1, 5, 7, 15, 20
T2
2, 3, 4, 5, 8, 16, 20
T3
T4
1, 2, 5, 8, 17, 19
T5
1, 2, 6, 8, 9, 13
We assume that line numbers 6, 13, 17 and 20 are modified, and line numbers 4, 7 and 15
are deleted from the source code. The information is stored as:
Selection, Minimization and Prioritization of Test Cases for Regression Testing 353
delloc =
del_locode =
modloc =
mod_locode =
3
[4, 7, 15]
4
[6, 13, 17, 20]
After deleting line numbers 4, 7, and 15, the modified execution history is given in Table 7.15.
Table 7.15.
Test case Id
T1
T2
T3
T4
T5
Execution history
1, 5, 20
2, 3, 5, 8, 16, 20
6, 8, 9, 10, 11, 12, 13, 14, 17, 18
1, 2, 5, 8, 17, 19
1, 2, 6, 8, 9, 13
354
Software Testing
The third portion of the algorithm compares lines covered by each test case with lines
covered by other test cases. A two-dimensional array count is used to keep the record of line
number matched in each test case. If all the lines covered by a test case are being covered
by some other test case, then that test case is redundant and should not be selected for
execution.
On comparing all values in each test case with all values of other test cases, we found that
test case 1 and test case 5 are redundant test cases. These two test cases do not cover any line
which is not covered by other test cases as shown in Table 7.16.
Table 7.16.
Test Case
Redundant Y/N
T1
T4
T2
20
T2
T3
T3
T3
T4
T2
13
T3
T5
Selection, Minimization and Prioritization of Test Cases for Regression Testing 355
The remaining test cases are = [T2, T3, T4] and are given in Table 7.17.
Table 7.17.
Test case Id
Execution history
T2
2, 3, 5, 8, 16, 20
T3
T4
1, 2, 5, 8, 17, 19
Now we will minimize and prioritize test cases using modification algorithm given in
section 7.5.2. The status of test cases covering the modified lines is given in Table 7.18.
Table 7.18.
Test Cases
T2
20
T3
6, 13, 17
T4
17
Test cases are sorted on the basis of number of modified lines covered as shown in tables 7.19-7.20.
Table 7.19.
Test Cases
Number of matches
(nfound)
Candidate
Priority
T3
6, 13, 17
T2
20
T4
17
Number of matches
(nfound)
Candidate
Priority
T2
20
T4
Hence, test cases T2 and T3 are needed to be executed and redundant test cases are T1 and T5.
Out of the five test cases, we need to run only 2 test cases for 100% code coverage of
modified code coverage. This is a 60% reduction. If we run only those test cases that cover any
modified lines, then T2, T3 and T4 are selected. This technique not only selects test cases, but
also prioritizes test cases.
356
Software Testing
Example 7.1: Consider the algorithm for deletion and modification of lines of source code in
test cases. Write a program in C to implement, minimize and prioritize test cases using the
above technique.
Solution:
#include<stdio.h>
#include<conio.h>
void main()
{
int t1[50][50]={0};
int count[50][50]={0};
int deleted[50],deloc,del_loc[50],k,c[50],l,num,m,n,match[50],i,j;
clrscr();
for(i=0;i<50;i++){
deleted[i]=0;
match[i]=0;
}
printf("Enter the number of test cases\n");
scanf("%d",&num);
for(i=0;i<num;i++){
printf("Enter the length of test case %d\n",i+1);
scanf("%d",&c[i]);
printf("Enter the values of test case\n");
for(j=0;j<c[i];j++){
scanf("%d",&t1[i][j]);
}
}
printf("\nEnter the deleted lines of code:");
scanf("%d",&deloc);
for(i=0;i<deloc;i++)
{
scanf("%d",&del_loc[i]);
}
for(i=0;i<num;i++){
for(j=0;j<c[i];j++){
for(l=0;l<deloc;l++){
if(t1[i][j]==del_loc[l]){
for(k=j;k<c[i];k++){
t1[i][k]=t1[i][k+1];
}
t1[i][k]=0;
c[i]--;
}
}
Selection, Minimization and Prioritization of Test Cases for Regression Testing 357
}
}
printf("Test case execution history after deletion:\n");
for(i=0;i<num;i++){
printf("T%d\t",i+1);
for(j=0;j<c[i];j++){
printf("%d ",t1[i][j]);
}
printf("\n");
}
for(i=0;i<num;i++){
for(j=0;j<num;j++){
if(i!=j&&deleted[j]!=1){
for(k=0;t1[i][k]!=0;k++){
for(l=0;t1[j][l]!=0;l++){
if(t1[i][k]==t1[j][l])
count[i][k]=1;
}
}
}
}
for(m=0;m<c[i];m++)
if(count[i][m]==1)
match[i]++;
if(match[i]==c[i])
deleted[i]=1;
}
for(i=0;i<num;i++)
if(deleted[i]==1)
printf("Remove Test case %d\n",i+1);
getch();
}
OUTPUT
Enter the number of test cases
5
Enter the length of test case 1
5
Enter the values of test case
1 5 7 15 20
Enter the length of test case 2
7
Enter the values of test case
2 3 4 5 8 16 20
Enter the length of test case 3
10
358
Software Testing
Selection, Minimization and Prioritization of Test Cases for Regression Testing 359
scanf("%d",&t1[i][j]);
}
pos[i]=i;
}
printf("\nEnter number of modied lines of code:");
scanf("%d",&modnum);
printf("Enter the lines of code modied:");
for(i=0;i<modnum;i++)
scanf("%d",&mod[i]);
while(1)
{
count=0;
for(i=0;i<num;i++) {
nfound[i]=0;
pos[i]=i;
}
for(i=0;i<num;i++){
l=0;
for(j=0;j<c[i];j++){
if(candidate[i]!=1){
for(k=0;k<modnum;k++) {
if(t1[i][j]==mod[k]){
nfound[i]++;
found[i][l]=mod[k];
l++;
}
}
}
}
}
l=0;
for(i=0;i<num;i++)
for(j=0;j<num-1;j++)
if(nfound[i]>nfound[j]){
t=nfound[i];
nfound[i]=nfound[j];
nfound[j]=t;
t=pos[i];
pos[i]=pos[j];
pos[j]=t;
}
for(i=0;i<num;i++)
if(nfound[i]>0)
count++;
if(count==0)
360
Software Testing
break;
candidate[pos[0]]=1;
priority[pos[0]]=++m;
printf("\nTestcase\tMatches");
for(i=0;i<num;i++) {
printf("\n%d\t\t%d",pos[i]+1,nfound[i]);
getch();
}
for(i=0;i<c[pos[0]];i++)
for(j=0;j<modnum;j++)
if(t1[pos[0]][i]==mod[j]){
mod[j]=0;
}
printf("\nModied Array:");
for(i=0;i<modnum;i++){
if(mod[i]==0){
continue;
}
else {
printf("%d\t",mod[i]);
}
}
}
count=0;
printf("\nTest case selected.....\n");
for(i=0;i<num;i++)
if(candidate[i]==1){
printf("\nT%d\t Priority%d\n ",i+1,priority[i]);
count++;
}
if(count==0){
printf("\nNone");
}
getch();
}
OUTPUT
Enter the number of test cases:10
Enter the length of test case1:6
Enter the length of test case2:7
Enter the length of test case3:8
Enter the length of test case4:8
Selection, Minimization and Prioritization of Test Cases for Regression Testing 361
362
Software Testing
4
0
2
0
1
0
10
0
Modied Array:35 45 55
Test case Matches
6
1
7
1
8
1
9
1
5
0
1
0
2
0
3
0
4
0
10
0
Modied Array:45 55
Test case Matches
7
1
8
1
9
1
4
0
5
0
6
0
1
0
2
0
3
0
10
0
Modied Array:55
Test case Matches
8
1
9
1
3
0
4
0
5
0
6
0
7
0
1
0
2
0
10
0
Modied Array:
Test case selected.....
T1 Priority1
T5 Priority2
T6 Priority3
T7 Priority4
T8 Priority5
Selection, Minimization and Prioritization of Test Cases for Regression Testing 363
364
Software Testing
EXERCISES
7.1 (a) What is regression testing? Discuss various categories of selective re-test problem.
(b) Discuss an algorithm for the prioritization of test cases.
7.2 What are the factors responsible for requirement changes? How are the requirements
traced?
Selection, Minimization and Prioritization of Test Cases for Regression Testing 365
7.3 Identify the reasons which are responsible for changes in the software. Comment on
the statement change is inevitable.
7.4 Compare regression testing with development testing. Do we perform regression
testing before the release of the software?
7.5 Is it necessary to perform regression testing? Highlight some issues and difficulties of
regression testing.
7.6 Explain the various steps of the regression testing process. Which step is the most
important and why?
7.7 Discuss techniques for selection of test cases during regression testing. Why do we rely
on the selection of test cases based on modification traversing?
7.8 What are selective re-test techniques? How are they different from the retest all
technique?
7.9 What are the categories to evaluate regression test selection technique? Why do we use
such categorization?
7.10 (a) Discuss the priority category schemes for the prioritization of test cases.
(b) What is the role of risk matrix for the reduction of test cases?
7.11 How is risk analysis used in testing? How can we prioritize test cases using risk
factor?
7.12 What is a risk matrix? How do we assign thresholds that group the potential problems
into priority categories?
7.13 Explain the following:
(a) Modification traversing test cases
(b) Modification revealing test cases
7.14 What is the difference between general test case prioritization and version specific test
case prioritization? Discuss any prioritization technique with the help of an example.
7.15 Explain the code coverage prioritization technique. What are the test cases selection
criteria? Write the modification algorithm which is used to minimize and prioritize test
cases.
FURTHER READING
Many test cases may be generated using test design techniques. Applying risk analysis
may help the software tester to select the most important test cases that address the
most significant features. Tamres describes risk analysis in order to prioritize test
cases:
L. Tamres, Introduction to Software Testing, Pearson Education, 2005.
The following article provides a full account on design and maintenance of behavioural
regression test suites that may help to change code with confidence:
Nada daVeiga, Change Code without Fear: Utilize a Regression Safety Net,
DDJ, February 2008.
Useful recommendations x on regression testing by Microsoft can be obtained from:
Microsoft regression testing recommendations, http://msdn.microsoft.com/en-us/
library/aa292167(VS.71).aspx
366
Software Testing
Selection, Minimization and Prioritization of Test Cases for Regression Testing 367
The following research paper provides an excellent comparison in order to analyze the
costs and benefits of several regression test selection algorithms:
T.L. Graves, M.J. Harrold, J.M. Kim, A. Porter, G. Rothermel, An Empirical
Study: Regression Test Selection Techniques, ACM Transactions on Software
Engineering and Methodology, vol. 10 , no. 2, pp. 180208, April 2001.
Other similar study includes:
Gregg Rothermel, Mary Jean Harrold, Jeffery von Ronne, Christie Hong,
Empirical Studies of Test-suite Reduction, Software Testing, Verification and
Reliability, vol. 12, no. 4, pp. 219249, 2002.
8
Software Testing Activities
We start testing activities from the first phase of the software development life cycle. We may
generate test cases from the SRS and SDD documents and use them during system and
acceptance testing. Hence, development and testing activities are carried out simultaneously in
order to produce good quality maintainable software in time and within budget. We may carry
out testing at many levels and may also take help of a software testing tool. Whenever we
experience a failure, we debug the source code to find reasons for such a failure. Finding the
reasons for a failure is a very significant testing activity and consumes a huge amount of
resources and may also delay the release of the software.
370
Software Testing
Coupling increases as the number of calls amongst units increases or the amount of shared data
increases. A design with high coupling may have more errors. Loose coupling minimizes the
interdependence, and some of the steps to minimize coupling are given as:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
Different types of coupling are data (best), stamp, control, external, common and content (worst).
When we design test cases for interfaces, we should be very clear about the coupling amongst units
and if it is high, a large number of test cases should be designed to test that particular interface.
A good design should have low coupling and thus interfaces become very important. When
interfaces are important, their testing will also be important. In integration testing, we focus on
the issues related to interfaces amongst units. There are several integration strategies that really
have little basis in a rational methodology and are given in Figure 8.4. Top down integration
starts from the main unit and keeps on adding all called units of the next level. This portion
should be tested thoroughly by focusing on interface issues. After completion of integration
testing at this level, add the next level of units and so on till we reach the lowest level units
(leaf units). There will not be any requirement of drivers and only stubs will be designed. In
bottom-up integration, we start from the bottom, (i.e. from leaf units) and keep on adding upper
level units till we reach the top (i.e. root node). There will not be any need of stubs. A sandwich
372
Software Testing
strategy runs from top and bottom concurrently, depending upon the availability of units and
may meet somewhere in the middle.
(a) Top down integration (focus starts from edges a, b, c and so on)
Each approach has its own advantages and disadvantages. In practice, sandwich integration
approach is more popular. This can be started as and when two related units are available. We
may use any functional or structural testing techniques to design test cases.
Functional testing techniques are easy to implement with a particular focus on the interfaces
and some structural testing techniques may also be used. When a new unit is added as a part
of integration testing, then the software is considered as a changed software. New paths are
designed and new input(s) and output(s) conditions may emerge and new control logic may be
invoked. These changes may also cause problems with units that previously worked
flawlessly.
374
Software Testing
(like operating systems, compilers, case tools, etc.), then acceptance testing is not feasible. In
such cases, potential customers are identified to test the software and this type of testing is
called alpha / beta testing. Beta testing is done by many potential customers at their sites
without any involvement of developers / testers. However, alpha testing is done by some
potential customers at the developers site under the direction and supervision of developers
testers.
8.2 DEBUGGING
Whenever a software fails, we would like to understand the reason(s) for such a failure. After
knowing the reason(s), we may attempt to find the solution and may make necessary changes
in the source code accordingly. These changes will hopefully remove the reason(s) for that
software failure. The process of identifying and correcting a software error is known as
debugging. It starts after receiving a failure report and completes after ensuring that all
corrections have been rightly placed and the software does not fail with the same set of
input(s). The debugging is quite a difficult phase and may become one of the reasons for the
software delays.
Every bug detection process is different and it is difficult to know how long it will take to
detect and fix a bug. Sometimes, it may not be possible to detect a bug or if a bug is detected,
it may not be feasible to correct it at all. These situations should be handled very carefully. In
order to remove bugs, developers should understand that a problem prevails and then he/she
should do the classification of the bug. The next step is to identify the location of the bug in
the source code and finally take the corrective action to remove the bug.
Replication of the bug: The first step in fixing a bug is to replicate it. This means
to recreate the undesired behaviour under controlled conditions. The same set of
input(s) should be given under similar conditions to the program and the program after
execution, should produce a similar unexpected behaviour. If this happens, we are
able to replicate a bug. In many cases, this is simple and straight forward. We execute
the program on a particular input(s) or we press a particular button on a particular
dialog, and the bug occurs. In other cases, replication may be very difficult. It may
require many steps or in an interactive program such as a game, it may require precise
timing. In worst cases, replication may be nearly impossible. If we do not replicate the
bug, how will we verify the fix? Hence, failure to replicate a bug is a real problem.
If we cannot do it, any action, which cannot be verified, has no meaning, howsoever
important it may be. Some of the reasons for non-replication of a bug are:
The user incorrectly reported the problem.
The program has failed due to hardware problems like memory overflow, poor
network connectivity, network congestion, non-availability of system buses,
deadlock conditions, etc.
The program has failed due to system software problems. The reason may be the
usage of a different type of operating system, compilers, device drivers, etc. There
may be any of the above-mentioned reasons for the failure of the program,
although there is no inherent bug in the program for this particular failure.
Our effort should be to replicate the bug. If we cannot do so, it is advisable to keep
the matter pending till we are able to replicate it. There is no point in playing with the
source code for a situation which is not reproducible.
376
Software Testing
(ii)
Sometimes simple print statements may help us to locate the sources of the bad
behaviour. This simple way provides us the status of various variables at different
locations of the program with a specific set of inputs. A sequence of print statements
may also portray the dynamics of variable changes. However, it is cumbersome to use
in large programs. They may also generate superfluous data which may be difficult to
analyze and manage.
We may add check routines in the source code to verify the correctness of the data
structures. This may help us to know the problematic areas of the source code. If
execution of these check routines is not very time consuming, then we may always add
them. If it is time consuming, we may design a mechanism to make them operational,
whenever required.
The most useful and powerful way is to inspect the source code. This may help
us to understand the program, understand the bug and finally locate the bug. A clear
understanding of the program is an absolute requirement of any debugging activity.
Sometimes, the bug may not be in the program at all. It may be in a library routine
or in the operating system, or in the compiler. These cases are very rare, but there are
chances and if everything fails, we may have to look for such options.
(iv) Fix the bug and re-test the program
After locating the bug, we may like to fix the bug. The fixing of a bug is a programming
exercise rather than a debugging activity. After making necessary changes in the source
code, we may have to re-test the source code in order to ensure that the corrections
have been rightly done at right place. Every change may affect other portions of the
source code too. Hence an impact analysis is required to identify the affected portion
and that portion should also be re-tested thoroughly. This re-testing activity is called
regression testing which is a very important activity of any debugging process.
(ii)
378
Software Testing
of the bracketed location may help us to rectify the bug. Another obvious variation
of backtracking is forward tracking, where we use print statements or other means to
examine a succession of intermediate results to determine at what point the result first
became wrong. These approaches (backtracking and forward tracking) may be useful
only when the size of the program is small. As the program size increases, it becomes
difficult to manage these approaches.
(iii) Brute Force
This is probably the most common and efficient approach to identify the cause of
a software failure. In this approach, memory dumps are taken, run time traces are
invoked and the program is loaded with print statements. When this is done, we may
find a clue by the information produced which leads to identification of cause of a
bug. Memory traces are similar to memory dumps, except that the printout contains
only certain memory and register contents and printing is conditional on some event
occurring. Typically conditional events are entry, exit or use of one of the following:
A particular subroutine, statement or database
Communication with I/O devices
Value of a variable
Timed actions (periodic or random) in certain real time system.
A special problem with trace programs is that the conditions are entered in the source
code and any changes require a recompilation. A huge amount of data is generated,
which, although may help to identify the cause, but may be difficult to manage and
analyze.
(iv) Cause Elimination
Cause elimination is manifested by induction or deduction and also introduces the
concept of binary partitioning. Data related to error occurrence are organized to isolate
potential causes. Alternatively, a list of all possible causes is developed and tests are
conducted to eliminate each. Therefore, we may rule out causes one by one until a
single one remains for validation. The cause is identified, properly fixed and re-tested
accordingly.
Run time debuggers may detect bugs in the program, but may fail to find the causes of
failures. We may need a special tool to find causes of failures and correct the bug. Some errors
like memory corruption and memory leaks may be detected automatically. The automation was
the modification in the debugging process because it automated the process of finding the bug.
A tool may detect an error and our job is to simply fix it. These tools are known as automatic
debugger and are available in different varieties. One variety may be a library of functions that
may be connected into the program. During execution of the program, these functions are
called and the debugger looks for memory corruption and other similar issues. If anything is
found, it is reported accordingly.
Compilers are also used for finding bugs. Of course, they check only syntax errors and
particular types of run time errors. Compilers should give proper and detailed messages of
errors that will be of great help to the debugging process. Compilers may give all such
information in the attribute table, which is printed along with the listing. The attribute table
contains various levels of warnings which have been picked up by the compiler scan and which
are noted. Hence, compilers come with an error detection feature and there is no excuse to
design compilers without meaningful error messages.
We may apply a wide variety of tools like run time debugger, automatic debugger, automatic
test case generators, memory dumps, cross reference maps, compilers, etc. during the
debugging process. However, tools are not the substitute for careful examination of the source
code after thorough understanding.
380
Software Testing
all know, static testing is about prevention and dynamic testing is about cure. We should use
both the tools but prevention is always better than cure. These tools will find more bugs as
compared to dynamic testing tools (where we execute the program). There are many areas for
which effective static testing tools are available, and they have shown their results for the
improvement of the quality of the software.
(i)
are simple and may find many critical and weak areas of the program. They may also
suggest possible changes in the source code for improvement.
382
Software Testing
and stress testing. Some of the popular tools are Mercury Interactives Load Runner,
Apaches J Meter, Segue Softwares Silk Performer, IBM Rationals Performance
Tester, Comuwares QALOAD and AutoTesters AutoController.
(vii) Functional / Regression Testing Tools
These tools are used to test the software on the basis of its functionality without
considering the implementation details. They may also generate test cases automatically
and execute them without human intervention. Many combinations of inputs may be
considered for generating test cases automatically and these test cases may be executed,
thus, relieving us from repeated testing activities. Some of the popular available tools
are IBM Rationals Robot, Mercury Interactives Win Runner, Comuwares QA Centre
and Segue Softwares Silktest.
384
Software Testing
386
Software Testing
EXERCISES
8.1 What are the various levels of testing? Explain the objectives of every level. Who
should do testing at every level and why?
8.2 Is unit testing possible or even desirable in all circumstances? Justify your answer with
examples.
8.3 What is scaffolding? Why do we use stubs and drivers during unit testing?
8.4 What are the various steps to minimize the coupling amongst various units? Discuss
different types of coupling from the best coupling to the worst coupling.
8.5 Compare the top down and bottom up integration testing approaches to test a
program.
8.6 What is debugging? Discuss two debugging techniques. Write features of these
techniques and compare the important features.
8.7 Why is debugging so difficult? What are the various steps of a debugging process?
8.8 What are the popular debugging approaches? Which one is more popular and why?
8.9 Explain the significance of debugging tools. List some commercially available
debugging tools.
8.10 (a) Discuss the static and dynamic testing tools with the help of examples.
(b) Discuss some of the areas where testing cannot be performed effectively without
the help of a testing tool.
8.11 Write short notes on:
(i) Coverage analysis tools
(ii) Performance testing tools
(iii) Functional / Regression testing tools
8.12 What are non-functional requirements? How can we use software tools to test these
requirements? Discuss some popular tools along with their areas of applications.
8.13 Explain stress, load and performance testing.
8.14 Differentiate between the following:
(a) Integration testing and system testing
(b) System testing and acceptance testing
(c) Unit testing and integration testing
(d) Testing and debugging
8.15 What are the objectives of process management tools? Describe the process of selection
of such a tool. List some commercially available process management tools.
8.16 What is the use of a software test plan document in testing? Is there any standard
available?
8.17 Discuss the outline of a test plan document as per IEEE Std 829-1998.
8.18 Consider the problem of the URS given in chapter 5, and design a software test plan
document.
8.19 Which is the most popular level of testing a software in practice and why?
8.20 Which is the most popular integration testing approach? Discuss with suitable
examples.
FURTHER READING
The IEEE standard on software unit testing presents a standard approach to unit testing
that can be used as sound software engineering practice. It also provides guidelines for
a software practitioner for implementation and usage of software unit testing:
IEEE Standards Board, IEEE Standard for Software Unit Testing: An American
National Standard, ANSI/IEEE Std 10081987.
For a comprehensive set of 27 guidelines on unit testing refer to:
Unit Testing Guidelines, Geotechnical Software Services, http://geosoft.no/
development/unittesting.html, 2007.
An excellent book on The Art of Unit Testing was written by Osherove:
R. Osherove, The Art of Unit Testing, Manning Publications.
The following survey paper defines a number of practices that may be followed during
unit testing:
Per Runeson, A Survey of Unit Testing Practices, IEEE Software, vol. 23, no.
4, pp. 2229, July/Aug. 2006, doi:10.1109/MS.2006.91.
Agans provides nine main debugging rules and several sub-debugging rules. These
sub-rules are derived from common sense and several years of experience:
David J. Agans, Debugging: The Nine Indispensable Rules for Finding Even the
Most Elusive Software and Hardware Problems, AMACOM, 2002.
An introduction to debugging approaches may be found in Chapter 7 in Myers book:
G.J Myers, The Art of Software Testing, John Wiley & Sons, 2004.
A software practitioner spends lots of time in identifying and fixing bugs. The essay
written by Taylor provides a good discussion on debugging:
Ian Lance Taylor, Debugging, http://www.airs.com/ian/, 2003.
Other similar books include:
John Robbins, Debugging Applications, Microsoft Press, 2000.
Matthew A. Telles, Yuan Hsieh, The Science of Debugging, The Coriolis
Group, 2001.
Dmitry Vostokov, Memory Dump Analysis Anthology, vol. 1, OpenTask,
2008.
388
Software Testing
9
Object Oriented Testing
What is object orientation? Why is it becoming important and relevant in software development?
How is it improving the life of software developers? Is it a buzzword? Many such questions
come into our mind whenever we think about object orientation of software engineering.
Companies are releasing the object oriented versions of existing software products. Customers
are also expecting object oriented software solutions. Many developers are of the view that
structural programming, modular design concepts and conventional development approaches
are old fashioned activities and may not be able to handle todays challenges. They may also
feel that real world situations are effectively handled by object oriented concepts using
modeling in order to understand them clearly. Object oriented modeling may improve the
quality of the SRS document, the SDD document and may help us to produce good quality
maintainable software. The software developed using object orientation may require a different
set of testing techniques, although few existing concepts may also be applicable with some
modifications.
390
Software Testing
perform this task. We are not required to know the details of the operations to be performed to
complete this task. Our interest is very limited and focused. If we investigate further, we may
come to know that there are many ways to send a book like using train network, bus network, air
network or combinations of two or more available networks. The selection of any method is the
prerogative of our agent (say, agent of Fast Track Company). The agent may transfer the book to
another agent with the delivery address and a message to transfer to the next agent and so on. Our
task may be done by a sequence of requests from one agent to another.
In object orientation, an action is initiated by sending a message (request) to an agent who is
responsible for that action. An agent acts as a receiver and if it accepts a message (request), it
becomes its responsibility to initiate the desired action using some method to complete the task.
In real world situations, we do not need to know all operations of our agents to complete the
assigned task. This concept of information hiding with respect to message passing has
become very popular in object oriented modeling. Another dimension is the interpretation of
the message by the receiver. All actions are dependent upon the interpretation of the received
message. Different receivers may interpret the same message differently. They may decide to
use different methods for the same message. Fast Track Agency may use air network while
another agency may use train network and so on. If we request our tailor to send the book, he
may not have any solution for our problem. The tailor will only deny such requests. Hence, a
message should be issued to a proper agent (receiver) in order to complete the task. Object
orientation is centered around a few basic concepts like objects, classes, messages, interfaces,
inheritance and polymorphism. These concepts help us to model a real world situation which
provides the foundation for object oriented software engineering.
All objects have unique identification and are distinguishable. There may be four horses
having same attributes colour, breed and size but all are distinguishable due to their colour. The
term identity means that objects are distinguished by their inherent existence and not by
descriptive properties [JOSH03].
What types of things become objects? Anything and everything may become an object. In
our example, customer, courier and tracking are nothing but objects. The class of an object
provides the structure for the object i.e. its state and operations. A courier class is shown in
Figure 9.2 with eight attributes and four operations.
An attribute (or information / state) is a data value held by the object of a class. The courier
may have different height, weight, width, length, description and it may be delivered or not.
The attributes are shown as the second part of the class courier as given in Figure 9.2.
Operations (or behaviour) are the functions which may be applied on a class. All objects of a
class have the same operations. A class courier shown in Figure 9.2 has four operations namely
addcourierdetail, deletecourier, updatecourier and viewstatus. These four operations are
defined for a Class Courier in the Figure 9.2. In short, every object has a list of functions
(operation part) and data values required to store information (Attribute part).
I. Jacobson has defined a class as [JACO98]:
A class represents a template for several objects and describes how these
objects are structured internally. Objects of the same class have the same
definition both for their operations and for their information structures.
In an object oriented system, every object has a class and the object is called an instance of
that class. We use object and instance as synonyms and an object is defined as [JACO98]:
An instance is an object created from a class. The class describes the
(behaviour and information) structure of the instance, while the current state
of the instance is defined by the operations performed on the instance.
9.1.2 Inheritance
We may have more information about Fast Track Courier Company not necessarily because it
is a courier company but because it is a company. As a company, it will have employees,
balance sheet, profit / loss account and a chief executive officer. It will also charge for its
392
Software Testing
services and products from the customers. These things are also true for transport companies,
automobile companies, aircraft companies, etc. Since the category courier company is a more
specialized form of the category company and any knowledge of a company is also true for
a courier company and subsequently also true for Fast Track Courier Company.
We may organize our knowledge in terms of hierarchy of categories as shown in Figure
9.3.
394
Software Testing
ignition, clutch, break, gear system without knowing any details of the type of engine, batteries,
control system, etc. These details may not be required for a learner and may create unnecessary
confusion. Hence, abstraction concept provides independence and improves the clarity of the
system.
9.1.4 Polymorphism
The dictionary meaning of polymorphism is many forms. In the real world, the same
operations may have different meanings in different situations. For example, Human is a subclass of Mammal. Similarly Dog, Bird, Horse are also sub-classes of Mammal. If a
message come fast is issued to all mammals, all may not behave in the same way. The horse
and dog may run, the bird may fly and the human may take an aircraft. The behaviour of
mammals is different on the same message. This concept is known as polymorphism, where
the same message is sent to different objects irrespective of their class, but the responses of
objects may be different. When we abstract the interface of an operation and leave the
implementation details to sub-classes, this activity is called polymorphism. This operation is
called polymorphic operation. We may create a super class by pulling out important states,
behaviours and interfaces of the classes. This may further simplify the complexity of a
problem. An object may not need to know the class of another object to whom it wishes to send
a message, when we have polymorphism. This may be defined as [JACO98]:
Polymorphism means that the sender of a stimulus (message) does not need
to know the receiving instances class. The receiving instance can belong to
an arbitrary class.
Polymorphism is considered to be an important concept of any object oriented programming
language. As we all know, arithmetic operators such as +, =, - are used to operate on primary
data types such as int, float, etc. We may overload these operators so that they may operate in
the same way on objects (user defined data types) as they operate on primary data types. Thus,
the same operators will have multiple forms.
9.1.5 Encapsulation
Encapsulation is also known as information hiding concept. It is a way in which both data and
functions (or operations) that operate on data are combined into a single unit. The only way to
access the data is through functions, which operate on the data. The data is hidden from the
external world. Hence, it is safe from outside (external) and accidental modifications. For
example, any object will have attributes (data) and operations which operate on the specified
data only.
If data of any object needs to be modified, it may be done through the specified functions
only. The process of encapsulating the data and functions into a single unit simplifies the
activities of writing, modifying, debugging and maintaining the program.
In a university, every school may access and maintain its data on its own. One school is not
allowed to access the data of another school directly. This is possible only by sending a request
to the other school for the data. Hence, the data and functions that operate on the data are
specific to each school and are encapsulated into a single unit that is the school of a
university.
In order to test a class, we may create an instance of the class i.e. object, and pass the
appropriate parameters to the constructor. We may further call the methods of the object
passing parameters and receive the results. We should also examine the internal data of the
object. The encapsulation plays an important role in class testing because data and function
(operations) are combined in a class. We concentrate on each encapsulated class during unit
testing but each function may be difficult to test independently. Inter-class testing considers the
396
Software Testing
parameter passing issues between two classes and is similar to integration testing. System
testing considers the whole system and test cases are generated using functional testing
techniques.
Integration testing in object oriented system is also called inter-class testing. We do not have
hierarchical control structure in object orientation and thus conventional integration testing
techniques like top down, bottom up and sandwich integration cannot be applied. There are
three popular techniques for inter-class testing in object oriented systems. The first is the thread
based testing where we integrate classes that are needed to respond to an input given to the
system. Whenever we give input to a system, one or more classes are required to be executed
that respond to that input to get the result. We combine such classes which execute together for
a particular input or set of inputs and this is treated as a thread. We may have many threads in
a system, depending on various inputs. Thread based testing is easy to implement and has
proved as an effective testing technique. The second is the use case based testing where we
combine classes that are required by one use case.
The third is the cluster testing where we combine classes that are required to demonstrate
one collaboration. In all three approaches, we combine classes on the basis of a concept and
execute them to see the outcome. Thread based testing is more popular due to its simplicity and
easy implementability.
The advantage of object oriented system is that the test cases can be generated earlier in the
process, even when the SRS document is being designed. Early generation of test cases may
help the designers to better understand and express requirements and to ensure that specified
requirements are testable. Use cases are used to generate a good number of test cases. This
process is very effective and also saves time and effort. Developers and testers understand
requirements clearly and may design an effective and stable system. We may also generate test
cases from the SDD document. Both the teams (testers and developers) may review the SRS
and the SDD documents thoroughly in order to detect many errors before coding. However,
testing of source code is still a very important part of testing and all generated test cases will
be used to show their usefulness and effectiveness. We may also generate test cases on the basis
of the availability of the source code.
Path testing, state based testing and class testing are popular object oriented testing
techniques and are discussed in subsequent sections.
activities are performed. This is similar to a flow graph which is the basis of conventional path
testing. Activity diagram may be generated from a use case or from a class. It may represent
basic flow and also possible alternative flows. As shown in Figure 9.5, the start state is
represented by a solid circle and the end state is represented by a solid circle inside a circle.
The activities are represented by rectangles with rounded corners along with their descriptions.
Activities are nothing but the set of operations. After execution of these set of activities, a
transition takes place to another activity. Transitions are represented by an arrow. When
multiple activities are performed simultaneously, the situation is represented by a symbol
fork. The parallel activities are combined after the completion of such activities by a symbol
join. The number of fork and join in an activity diagram are the same. The branches are used
to describe what activities are performed after evaluating a set of conditions. Branches may
also be represented as diamonds with multiple labelled exit arrows. A guard condition is a
boolean expression and is also written along with branches. An activity diagram consisting of
seven activities is shown in Figure 9.5.
In the activity diagram given in Figure 9.5, Activity 2 and Activity 3 are performed
simultaneously and combined by a join symbol. After Activity 4, a decision is represented by
a diamond symbol and if the guard condition is true, Activity 5 is performed, otherwise
Activity 6 is performed. The fork has one incoming transition (Activity 1 is split into subactivities) and two outgoing transitions. Similarly join has two incoming transitions and one
outgoing transition. The symbols of an activity diagram are given in Table 9.1.
398
Software Testing
Symbol
Notation
Remarks
1.
Fork
2.
Join
3.
Transition
of control from one activity to
another.
4.
Activity
5.
Start
6.
End
7.
Branch
An activity diagram represents the flow of activities through the class. We may read the
diagram from top to bottom i.e. from start symbol to end symbol. It provides the basis for the
path testing where we may like to execute each independent path of the activity diagram at
least once.
We consider the program given in Figure 9.6 for determination of division of a student. We
give marks in three subjects as input to calculate the division of a student. There are three
methods in this program getdata, validate and calculate. The activity diagram for validate and
calculate functions is given in Figure 9.7 and Figure 9.8.
#include<iostream.h>
#include<conio.h>
class student
{
int mark1;
int mark2;
int mark3;
public:
void getdata()
{
cout<<"Enter marks of 3 subjects (between 0-100)\n";
cout<<"Enter marks of rst subject:";
cin>>mark1;
(Contd.)
400
Software Testing
mark1
mark2
mark3
Path
1.
101
40
50
Invalid marks
2.
90
75
75
calculate()
402
Software Testing
mark1
mark2
mark3
Path
1.
2.
3.
4.
5.
40
45
55
70
80
30
47
57
65
85
40
48
60
60
78
Fail
Third division
Second division
First division
First division with distinction
Example 9.1: Consider the program given in Figure 9.9 for determination of the largest amongst
three numbers. There are three methods in this program getdata, validate and maximum. Design
test cases for validate and maximum methods of the class using path testing.
#include<iostream.h>
#include<conio.h>
class greatest
{
oat A;
oat B;
oat C;
public:
void getdata()
{
cout<<Enter number 1:\n;
cin>>A;
cout<<Enter number 2:\n;
cin>>B;
cout<<Enter number 3:\n;
cin>>C;
}
void validate()
{
if(A<0||A>400||B<0||B>400||C<0||C>400){
cout<<Input out of range;
}
else{
maximum();
}
}
void maximum();
};
void greatest::maximum()
{
/*Check for greatest of three numbers*/
if(A>B) {
if(A>C) {
cout<<A;
}
else {
cout<<C;
}
}
else {
if(C>B) {
cout<<C;
}
else {
cout<<B;
}
}
}
void main()
{
clrscr();
greatest g1;
g1.getdata();
g1.validate();
getch();
}
Figure 9.9. Program to determine largest among three numbers
Solution:
The activity diagram for validate and calculate functions is given in Figure 9.10 and Figure
9.11 and their test cases are shown in Table 9.4 and Table 9.5.
Path
1.
500
40
50
2.
90
75
75
maximum()
404
Software Testing
Expected output
1.
100
87
56
100
2.
87
56
100
100
3.
56
87
100
100
4.
87
100
56
100
along with its responses to these events or methods. A state is represented by rectangles with
rounded corners and transitions are represented by edges (arrows). Events and actions are
represented by annotations on the directed edges. A typical state machine is shown in Figure
9.12 and descriptions of its associated terms are given in Table 9.6.
Terminologies used in
statechart diagram
Description
Remarks
1.
State
State1, state2
2.
Event
A,B
3.
Action
Next, previous
4.
Transition
5.
Guard condition
In the Figure 9.12, there are two states state1 and state2. If at state1, input A is given and
(x>y), then state1 is changed to state2 with an output next. At state2, if the input is B and
(x<y), then state2 is changed to state1 with an output previous. Hence, a transition transfers
a system from one state to another state. The first state is called the accepting state and another
is called the resultant state. Both states (accepting and resultant) may also be the same in case
of self-loop conditions. The state in question is the current state or present state. Transition
occurs from the current state to the resultant state.
We consider an example of a process i.e. program under execution that may have the
following states:
406
Software Testing
The state machine for life cycle of a process is shown in Figure 9.13. There are six states in
the state machine new, ready, running, time expired, waiting and terminated. The waiting
state is decomposed into three concurrent sub-states I/O operation, child process and interrupt
process. The three processes are separated by dashed lines. After the completion of these substates the flow of control joins to the ready state.
Two special states are used i.e. (alpha) and (omega) state for representing the constructor
and destructor of a class. These states may simplify testing of multiple constructors, exception
handling and destructors. Binder [BIND99] has explained this concept very effectively as:
The state is a null state representing the declaration of an object before its
construction. It may accept only a constructor, new, or a similar initialization
message. The state is reached after an object has been destructed or deleted,
or has gone out of scope. It allows for explicit modeling and systematic
testing of destructors, garbage collection, and other termination actions.
Alpha and omega states are different from start state and end state of a state chart diagram.
These are additional states to make things more explicit and meaningful.
We consider an example of a class stack where two operations push and pop, are
allowed. The functionality of a stack suggests three states empty, holding and full. There are
four events new, push, pop and destroy, with the following purposes:
(i)
(ii)
(iii)
(iv)
The state chart diagram for class stack is given in the Figure 9.14.
408
Software Testing
states are more than 10 or 15, it is difficult to keep track of various transitions. In practice, we
may have to handle systems with 100 states or more. State transition tables are used when the
number of states is more these tables and provide information in a compact tabular form. In
state transition tables, rows represent the present acceptable state and columns represent the
resultant state. The state transition table of a class stack is given in Table 9.7.
State transition tables represent every transition, event and action and may help us to design
the test cases.
Table 9.7. State transition table for stack class
State
Event/method
Resultant state
Alpha
Alpha
Empty
Holding
Full
Omega
new
push(x)
pop()
destroy
Empty
new
push(x)
pop()
destroy
Holding
new
push(x)
pop()
destroy
Full
new
push(x)
pop()
destroy
1.1
New
Empty
1.2
Push(x)
Holding
1.3
Pop()
1.4
destroy
Event (method)
Expected result
Test condition
Top=1
Action
State
Return x
Empty
Omega
2.1
New
Empty
2.2
Push(x)
Holding
2.3
Pop()
2.4
destroy
Omega
3.1
New
Empty
3.2
Push(x)
3.3
Push(x)
holding
3.4
destroy
Omega
Top>1
Return x
holding
Top<max-1
Holding
4.1
New
Empty
4.2
Push(x)
Holding
4.3
Push(x)
4.4
Pop()
Holding
4.5
destroy
Omega
Top=max-1
Full
5.1
New
Empty
5.2
Push(x)
Holding
5.3
Push(x)
5.4
destroy
omega
6.1
New
empty
6.2
destroy
Omega
Top=max-1
Full
Test condition
Expected result
Test state
Test event
Action
7.0
Empty
New
Illegal exception
8.0
Empty
Pop()
Illegal exception
9.0
Holding
New
Illegal exception
10.0
Holding
Push (top=max)
Illegal exception
11.0
Holding
Pop (top=0)
Illegal exception
12.0
Full
New
Illegal exception
13.0
Full
Push
Illegal exception
14.0
0mega
any
Illegal exception
410
Software Testing
Example 9.2: Consider the example of withdrawing cash from an ATM machine. The process
consists of the following steps:
(i)
(ii)
The customer will be asked to insert the ATM card and enter the PIN number.
If the PIN number is valid, the withdrawal transaction will be performed:
(a) The customer selects amount.
(b) The system verifies that it has sufficient money to satisfy the request; then the
appropriate amount of cash is dispensed by the machine and a receipt is issued.
(c) If sufficient amount is not available in the account, a message Balance not
sufficient is issued.
(iii) If the bank reports that the customers PIN is invalid, then the customer will have to
re-enter the PIN.
Draw a Statechart diagram and generate test cases using state based testing.
Solution:
State chart diagram for withdrawal of cash from an ATM machine is shown in Figure 9.15 and
test cases are given in Table 9.10.
1.1
New
1.2
Getting information
Enter pin
1.3
Validate
Validating PIN
1.4
Disapproved
1.5
Transaction not
completed
1.6
Destroy
Omega
2.1
New
Insert card
2.2
Getting information
Enter pin
2.3
Validate
Validating
2.4
Approved
Enter amount
2.5
Validate
Validating balance
2.6
Disapproved
2.7
Transaction not
completed
2.8
Destroy
Omega
3.1
New
Insert card
3.2
Getting information
Enter pin
3.3
Validate
Validating
3.4
Approved
Enter amount
3.5
Validate
3.6
Debit amount
3.7
Collect cash
3.8
Print receipt
Collect receipt
Printing
3.9
Transaction completed
Collect card
Ejecting card
3.10
Destroy
Event
Expected output
Test condition
Action
State
Insert card
Invalid pin
Collect card
Ejecting card
Balance<amount
Collect card
Ejecting card
Validating balance
Balance>=amount
Balance=balanceamount
Money dispensed from
slot
Omega
412
Software Testing
effort. Similarly, classes also cannot be tested in isolation. They may also require additional
source code (similar to stubs and drivers) for testing independently.
We should first specify the pre and post conditions for every operation/method of a class.
We may identify requirements for all possible combinations of situations in which a precondition can hold and post-conditions can be achieved. We may generate test cases to address
what happens when a pre-condition is violated [MCGR01]. We consider the stack class given
in Figure 9.16 and identify the following pre and post conditions of all the methods in the
class:
(i)
Stack::Stack()
(a) Pre=true
(b) Post: top=0
(ii) Stack::push(x)
(a) Pre: top<MAX
(b) Post: top=top+1
(iii) Stack::pop()
(a) Pre: top>0
(b) Post: top=top-1
After the identification of pre and post conditions, we may establish logical relationships
between pre and post conditions. Every logical relationship may generate a test case. We
consider the push() operation and establish the following logical relationships:
1. (pre condition: top<MAX; post condition: top=top+1)
2. (pre condition: not (top<MAX) ; post condition: exception)
Similarly for pop() operation, the following logical relationships are established:
3. (pre condition: top>0; post condition: top=top-1)
4. (pre condition: not (top>0) ; post condition: exception)
We may identify test cases for every operation/method using pre and post conditions. We
should generate test cases when a pre-condition is true and false. Both are equally important
to verify the behaviour of a class. We may generate test cases for push(x) and pop() operations
(refer Table 9.11 and Table 9.12).
Table 9.11. Test cases of function push()
Test input
Condition
Expected output
23
top<MAX
34
top=MAX
414
Software Testing
Condition
Expected output
top>0
23
top=0
Example 9.3. Consider the example of withdrawing cash from an ATM machine given in
example 9.2. Generate test cases using class testing.
Solution:
The class ATMWithdrawal is given in Figure 9.17.
ATMWithdrawal
accountID: integer
amount: integer
ATMWithdrawal (accid,
amt)
Withdraw()
The pre and post conditions of function Withdraw() are given as:
ATMWirthdrawal::Withdraw()
Pre: true
Post: if(PIN is valid) then
if (balance>=amount) then
balance=balance-amount
else
Display Insufcient balance
else
Display Invalid PIN
(true, PIN is valid and balance>=amount)
(true, PIN is valid and balance<amount)
(true, PIN is invalid)
AccountID
Amount
Expected output
1.
4321
1000
2.
4321
2000
3.
4322
Invalid PIN
416
Software Testing
EXERCISES
9.1
9.2
9.3
9.4
9.5
9.6
9.7
9.8
9.9
(a) What is object orientation? How is it close to real life situations? Explain basic
concepts which help us to model a real life situation.
(b) Describe the following terms:
Messages, Methods, Responsibility, Abstraction
(a) How is object oriented testing different from procedural testing?
(b) Discuss the following terms:
(i) Object
(ii) Class
(iii) Message
(iv) Inheritance
(v) Polymorphism
(a) Explain the issues in object oriented testing.
(b) What are various levels of testing? Which testing level is easy to test and why?
Explain the testing process for object oriented programs.
Write the limitations of the basic state model. How are they overcome in state charts?
What is state based testing? Draw the state machine model for a traffic light controller.
What are the limitations of a basic state model? How are they overcome in a state
chart?
What is an activity diagram? What are the basic symbols used in the construction of
such diagram? Explain with the help of an example.
How can we calculate cyclomatic complexity from an activity diagram? What does it
signify? What is the relationship of cyclomatic complexity with number of test
cases?
Write a program for finding the roots of a quadratic equation. Draw the activity
diagram and calculate the cyclomatic complexity. Generate the test cases on the basis
of cyclomatic complexity.
418
Software Testing
9.10 What is path testing? How can we perform it in object oriented software? Explain the
various steps of path testing.
9.11 What is a state chart diagram? Discuss the components of a state chart diagram.
Explain with the help of an example.
9.12 Draw the state chart diagram of a queue. Identify the operations of a queue and
generate the state transition table. Write test cases from the state transition table.
9.13 What is class testing? What are various issues related to class testing?
9.14 Define a class queue. Identify pre and post conditions of various operations and
generate the test cases.
9.15 Write short notes on:
(a) Class hierarchy
(b) Inheritance and Polymorphism
(c) Encapsulation
FURTHER READING
The basic concepts of object-oriented programming in a language independent manner
are presented in:
T. Budd, An Introduction to Object-Oriented Programming, Pearson Education,
India, 1997.
The definitions of object, class, inheritance and aggregation may be read from:
Edward V. Berard, Basic Object-Oriented Concepts, http://www.ipipan.gda.
pl/~marek/objects/TOA/oobasics/oobasics.html
Booch and his colleagues provide an excellent tutorial on nine diagrams of Unified
Modeling Language:
G. Booch, J. Rumbaugh and I.V. Jacobson., The Unified Modeling Language
User Guide, Addison-Wesley, Boston, 1999.
A useful guide to designing test cases for object-oriented applications. This book
provides comprehensive and detailed coverage of techniques to develop testable
models from unified Modeling Language and state machines:
R.V. Binder, Testing Object Oriented Systems: Models, Patterns and Tools,
Addison-Wesley, 1999.
Binder has significantly contributed in the area on object-oriented testing:
R. Binder, State-based Testing, Object Magazine, vol. 5, no.4, pp. 7578, JulyAug, 1995.
R. Binder, State-based testing: Sneak paths and Conditional Transitions,
Object magazine, vol. 5, no. 6, pp. 8789, Nov-Dec 1995.
R. Binder, Testing Object-Oriented Systems: A Status Report, Released online
by RBSC Corporation, 1995. Available at http://stsc.hill.af.mil/crosstalk/1995/
April/testinoo.asp.
McGregor and Sykes provide good differences of testing traditional and objectoriented software. They also describe methods for testing of classes:
J.D. McGregor and David A. Sykes, A Practical Guide to Testing Object
Oriented Software, Addison Wesley, 2001.
10
Metrics and Models in Software Testing
How do we measure the progress of testing? When do we release the software? Why do we
devote more time and resources for testing a particular module? What is the reliability of
software at the time of release? Who is responsible for the selection of a poor test suite? How
many faults do we expect during testing? How much time and resources are required for
software testing? How do we know the effectiveness of a test suite? We may keep on framing
such questions without much effort. However, finding answers to such questions is not easy
and may require a significant amount of effort. Software testing metrics may help us to
measure and quantify many things which may help us find some answers to such important
questions.
421
measure. The metric is a quantitative measure of the degree to which a product or process
possesses a given attribute. For example, a measure is the number of failures experienced
during testing. Measurement is the way of recording such failures. A software metric may be
an average number of failures experienced per hour during testing.
Fenton [FENT04] has defined measurement as:
It is the process by which numbers or symbols are assigned to attributes of
entities in the real world in such a way as to describe them according to
clearly defined rules.
The basic issue is that we want to measure every attribute of an entity. We should have
established metrics to do so. However, we are in the process of developing metrics for many
attributes of various entities used in software engineering.
Software metrics can be defined as [GOOD93]: The continuous application of measurement
based techniques to the software development process and its products to supply meaningful
and timely management information, together with the use of those techniques to improve that
process and its products.
Many things are covered in this definition. Software metrics are related to measures which,
in turn, involve numbers for quantification. These numbers are used to produce a better product
and improve its related process. We may like to measure quality attributes such as testability,
complexity, reliability, maintainability, efficiency, portability, enhanceability, usability, etc. for
a software. Similarly, we may also like to measure size, effort, development time and resources
for a software.
10.1.2 Applications
Software metrics are applicable in all phases of the software development life cycle. In the
software requirements and analysis phase, where output is the SRS document, we may have to
estimate the cost, manpower requirement and development time for the software. The customer
may like to know the cost of the software and development time before signing the contract.
As we all know, the SRS document acts as a contract between customer and developer. The
readability and effectiveness of the SRS document may help to increase the confidence level
of the customer and may provide better foundations for designing the product. Some metrics
are available for cost and size estimation like COCOMO, Putnam resource allocation model,
function point estimation model, etc. Some metrics are also available for the SRS document
like the number of mistakes found during verification, change request frequency, readability,
etc. In the design phase, we may like to measure stability of a design, coupling amongst
modules, cohesion of a module, etc. We may also like to measure the amount of data input to
a software, processed by the software and also produced by the software. A count of the
amount of data input to, processed in, and output from the software is called a data structure
metric. Many such metrics are available like number of variables, number of operators, number
of operands, number of live variables, variable spans, module weakness, etc. Some information
flow metrics are also popular like FAN IN, FAN OUT, etc.
Use cases may also be used to design metrics like counting actors, counting use cases,
counting the number of links, etc. Some metrics may also be designed for various applications
of websites like number of static web pages, number of dynamic web pages, number of internal
422
Software Testing
page links, word count, number of static and dynamic content objects, time taken to search a
web page and retrieve the desired information, similarity of web pages, etc. Software metrics
have a number of applications during the implementation phase and after the completion of
such a phase. Halstead software size measures are applicable after coding like token count,
program length, program volume, program level, difficulty, estimation of time and effort,
language level, etc. Some complexity measures are also popular like cyclomatic complexity,
knot count, feature count, etc. Software metrics have found a good number of applications
during testing. One area is the reliability estimation where popular models are Musas basic
execution time model and Logarithmic Poisson execution time model. Jelinski Moranda model
[JELI72] is also used for the calculation of reliability. Source code coverage metrics are
available that calculate the percentage of source code covered during testing. Test suite
effectiveness may also be measured. The number of failures experienced per unit of time,
number of paths, number of independent paths, number of du paths, percentage of statement
coverage and percentage of branch condition covered are also useful software metrics. The
maintenance phase may have many metrics like number of faults reported per year, number of
requests for changes per year, percentage of source code modified per year, percentage of
obsolete source code per year, etc.
We may find a number of applications of software metrics in every phase of the software
development life cycle. They provide meaningful and timely information which may help us
to take corrective actions as and when required. Effective implementation of metrics may
improve the quality of software and may help us to deliver the software in time and within
budget.
423
With these basic metrics, we may find some additional metrics as given below:
Actual time spent
1000
Estimated testing time
(i)
% of time spent
(ii)
(iii)
(iv)
(v)
We may design similar metrics to find the indications about the quality of the product.
These metrics, although simple, may help us to know the progress of testing and may
provide meaningful information to the testers and the project manager.
An effective test plan may force us to capture data and convert it into useful metrics for
both, process and product. This document also guides the organization for future projects and
may also suggest changes in the existing processes in order to produce a good quality
maintainable software product.
424
Software Testing
There are several metrics available in the literature to capture the quality of design and source
code.
Source
[CHID94]
[LI93]
Message Passing Coupling
(MPC)
in a class.
[CHID94]
coupling (ICP)
Fan-in
Fan-out
[LEE95]
[BINK98]
425
among the methods within a class. In most of the situations, highly cohesive classes are easy
to test.
Table 10.2. Cohesion metrics
Metric
Lack of Cohesion of Methods
(LCOM)
Sources
It measures the dissimilarity of methods in
a class by looking at the instance variable or
attributes used by methods. Consider a class
C1 with n methods M1, M2., Mn. Let (Ij) = set
of all instance variables used by method Mi.
There are n such sets {I1},.{In}. Let
P = {(Ii ,I j ) | Ii
I j = 0} and Q = {((Ii , I j ) | Ii
Ij
[CHID94]
0} .
|P|-|Q|, if |P|
0 otherwise
|Q|
[BEIM95]
[LEE95]
invocations of other methods of the same
class, weighted by the number of parameters
of the invoked method.
Sources
The depth of a class within the inheritance hierarchy is the maximum number of steps from the
class node to the root of the tree and is measured
by the number of ancestor classes.
[CHID94]
(Contd.)
426
Software Testing
(Contd.)
Metric
Sources
The number of classes that a class directly inherits from (i.e. multiple inheritance).
[LORE94]
[TEGA92]
[LORE94]
Sources
It counts the total number of attributes
WMC
Ci
i 1
427
10.4.1 Time
We may measure many things during testing with respect to time and some of them are given
as:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
A test case requires some time for its execution. A measurement of this time may help to
estimate the total time required to execute a test suite. This is the simplest metric and may
estimate the testing effort. We may calculate the time available for testing at any point in time
during testing, if we know the total allotted time for testing. Generally a unit of time is seconds,
minutes or hours, per test case. The total testing time may be defined in terms of hours. The
time needed to execute a planned test suite may also be defined in terms of hours.
When we test a software product, we experience failures. These failures may be recorded in
different ways like time of failure, time interval between failures, cumulative failures
experienced up to a given time and failures experienced in a time interval. Consider the Table
10.5 and Table 10.6 where time-based failure specification and failure-based failure
specification are given:
Table 10.5.
S. No. of failure occurrences
1.
12
12
2.
26
14
3.
35
09
4.
38
03
5.
50
12
(Contd.)
428
Software Testing
(Contd.)
S. No. of failure occurrences
6.
70
20
7.
106
36
8.
125
19
9.
155
30
10.
200
45
Table 10.6.
Time in minutes
Cumulative failures
20
01
01
40
04
03
60
05
01
80
06
01
100
06
00
120
07
01
140
08
01
160
09
01
180
09
00
200
10
01
These two tables give us an idea about the failure pattern and may help us to define the
following:
(i)
(ii)
(iii)
(iv)
429
The weight for each defect is defined on the basis of defect severity and removal cost. A
severity rate is assigned to each defect by testers based on how important or serious the defect
is. A lower value of this metric indicates a lower number of errors detected or a lesser number
of serious errors detected.
We may also calculate the number of defects per execution test case. This may also be used
as an indicator of source code quality as the source code progresses through the series of test
activities [STEP03].
100
Higher the value of this metric, higher the confidence about the effectiveness of a test suite.
We should write additional test cases to cover the uncovered portions of the source code.
Where
Failed test case: A test case that when executed, produces an undesired output.
Passed test case: A test case that when executed, produces a desired output.
Higher the value of this metric, higher the efficiency and effectiveness of the test cases
because it indicates that they are able to detect a higher number of defects.
430
Software Testing
It is expected that the reliability of a program increases due to fault detection and correction
over time and hence the failure intensity decreases accordingly.
(i) Basic Execution Time Model
This is one of the popular models of software reliability assessment and was developed
by J.D. MUSA [MUSA79] in 1979. As the name indicates, it is based on execution time
( ). The basic assumption is that failures may occur according to a Non-Homogeneous
Poisson Process (NHPP) during testing. Many examples may be given for real world
events where poisson processes are used. A few examples are given as:
(i) Number of users using a website in a given period of time
(ii) Number of persons requesting for railway tickets in a given period of time
(iii) Number of e-mails expected in a given period of time
431
The failures during testing represent a non-homogeneous process and the failure intensity
decreases as a function of time. J.D. Musa assumed that the decrease in failure intensity as a
function of the number of failures observed is constant and is given as:
( ) =
Where
Vo
Musa [MUSA79] has also given the relationship between failure intensity ( ) and the mean
failures experienced () and is given in Figure 10.1.
Figure 10.1.
If we take the first derivative of the equation given above, we get the slope of the failure
intensity as given below:
d
d
Vo
432
Software Testing
The negative sign shows that there is a negative slope indicating a decrementing trend in
failure intensity.
This model also assumes a uniform failure pattern meaning thereby equal probability of
failures due to various faults. The relationship between execution time ( ) and mean failures
experienced ( ) is given in Figure 10.2.
The derivation of the relationship of Figure 10.2 may be obtained as:
d ( )
( )
= o 1
d
Vo
o
( ) = Vo 1 exp
Vo
The failure intensity as a function of time is given in Figure 10.3.
and
This relationship is useful for calculating the present failure intensity at any given value of
execution time. We may find this relationship as:
( ) =
exp
Vo
Two additional equations are given to calculate additional failures required to be experienced
( ) to reach a failure intensity objective ( F) and additional time required to reach the
objective ( ). These equations are given as:
=
Vo
Vo
o
ln
P
F
Where
: Expected number of additional failures to be experienced to reach failure
intensity objective.
: Additional time required to reach the failure intensity objective.
433
and
are very interesting metrics to know the additional time and additional failures
required to achieve a failure intensity objective.
Example 10.1 A program will experience 100 failures in infinite time. It has now experienced
50 failures. The initial failure intensity is 10 failures/hour. Use the basic execution time model
for the following:
(i) Find the present failure intensity.
(ii) Calculate the decrement of failure intensity per failure.
(iii) Determine the failure experienced and failure intensity after 10 and 50 hours of
execution.
(iv) Find the additional failures and additional execution time needed to reach the failure
intensity objective of 2 failures/hour.
Solution:
(a)
The present failure intensity can be calculated using the following equation:
( )=
VO
VO = 100 failures
= 50 failures
O
= 10 failures/hour
Hence ( ) = 10 1
50
100
= 5 failures/hour
(b)
Decrement of failure intensity per failure can be calculated using the following:
d
d
VO
10
0.1 / hour
100
Failures experienced and failure intensity after 10 and 50 hours of execution can be
calculated as:
=
(c)
( )=
(i)
exp
Vo
434
Software Testing
exp
VO
10 50
100
= 10 exp ( 5)
= 10 exp
= 0.067 failures/hour
(ii)
( ) = 100 1 exp
exp
VO
10 10
100
= 10 exp
(c)
= 10 exp 1
= 3.68 failures/hour
and
with failure intensity objective of 2 failures/hour:
=
Vo
100
5 2
10
V
= o ln P
30 failures
100
5
=
ln
9.16 hours
10
2
(iv) Logarithmic Poisson Execution time model
With a slight modification in the failure intensity function, Musa presented a logarithmic
poisson execution time model. The failure intensity function is given as:
)
= o exp(
Where
: Failure intensity decay parameter which represents the relative change of failure intensity
per failure experienced.
435
exp(
d
=
d
The expected number of failures for this model is always infinite at infinite time. The
relation for mean failures experienced is given as:
1
( )=
ln o
1
The expression for failure intensity with respect to time is given as:
o
( )=
( o
1)
The relationship for an additional number of failures and additional execution time are
given as:
1
ln P
=
F
When execution time is more, the logarithmic poisson model may give larger values of
failure intensity than the basic model.
Example 10.2: The initial failure intensity of a program is 10 failures/hour. The program has
experienced 50 failures. The failure intensity decay parameter is 0.01/failure. Use the
logarithmic poisson execution time model for the following:
(a)
(b)
(c)
(d)
Solution:
(a)
exp(
= 50 failures
= 50 failures
= 0.01/failures
436
Software Testing
Hence
(b)
(c)
= 10 exp
= 6.06 failures/hour
Decrement of failure intensity per failure can be calculated as:
d
d =
= 0.01 6.06
= 0.06
Failure experienced and failure intensity after 10 and 50 hours of execution can be
calculated as:
=
ln
(ii)
(d)
o
o
(i)
50 0.01
1
After 10 hours of execution:
1
=
ln 10 0.01 10 1
0.01
1
=
ln( 2) 69 failures
0.01
10
10
=
5 failures/hour
10 0.01 10 1
2
After 50 hours of execution:
1
=
ln 10 0.01 50 1
0.01
1
=
ln(6 ) 179 failures
0.01
10
10
=
1.66 failures/hour
10 0.01 50 1
6
o
and
=
ln
=
=
=
1
6.06
ln
0.01
2
1
1 1
0.01 2
1
6.06
110 failures
33.5 hours
437
(N
i 1)
: Constant of proportionality
N : total number of errors present
i : number of errors found by time interval ti.
This model assumes that all failures have the same failure rate. It means that the failure rate
is a step function and there will be an improvement in reliability after fixing an error. Hence,
every failure contributes equally to the overall reliability. Here, failure intensity is directly
proportional to the number of errors remaining in a software.
Once we know the value of failure intensity function using any reliability model, we may
calculate reliability using the equation given below:
R(t) = e t
Where is the failure intensity and t is the operating time. Lower the failure intensity,
higher is the reliability and vice versa.
Example 10.3: A program may experience 200 failures in infinite time of testing. It has
experienced 100 failures. Use Jelinski-Moranda model to calculate failure intensity after the
experience of 150 failures.
Solution:
Total expected number of failures (N) = 200
Failures experienced (i) =100
Constant of proportionality ( ) = 0.02
We know
(t) =
(N
i 1)
438
Software Testing
To achieve help for planning and executing testing by focusing resources on the fault-prone
parts of the design and source code, the model used to predict faulty classes should be used.
The fault prediction model can also be used to identify classes that are prone to have severe
faults. One can use this model with respect to high severity of faults to focus the testing on
those parts of the system that are likely to cause serious failures. In this section, we describe
models used to find relationship between object oriented metrics and fault proneness, and how
such models can be of great help in planning and executing testing activities [MALH09,
SING10].
In order to perform the analysis, we used public domain KC1 NASA data set [NASA04].
The data set is available on www.mdp.ivv.nasa.gov. The 145 classes in this data were developed
using C++ language.
The goal of our analysis is to explore empirically the relationship between object oriented
metrics and fault proneness at the class level. Therefore, fault proneness is the binary dependent
variable and object oriented metrics (namely WMC, CBO, RFC, LCOM, DIT, NOC and
SLOC) are the independent variables. Fault proneness is defined as the probability of fault
detection in a class. We first associated defects with each class according to their severities.
The value of severity quantifies the impact of the defect on the overall environment with 1
being most severe to 5 being least severe. Faults with severity rating 1 were classified as high
severity faults. Faults with severity rating 2 were classified as medium severity faults and
faults with severity rating 3, 4 and 5 as low severity faults as at severity rating 4 no class was
found to be faulty and at severity rating 5, only one class was faulty. Table 10.7 summarizes
the distribution of faults and faulty classes at high, medium and low severity levels in the KC1
NASA data set after pre-processing of faults in the data set.
Table 10.7. Distribution of faults and faulty classes at high, medium and low severity levels
Level of severity Number of faulty
classes
High
23
% of faulty
classes
15.56
Number of faults
48
% of Distribution of
faults
7.47
Medium
58
40.00
449
69.93
Low
39
26.90
145
22.59
The min, max, mean, median, std dev, 25% quartile and 75% quartile for all
metrics in the analysis are shown in Table 10.8.
Table 10.8. Descriptive statistics for metrics
Metric
Min.
Max.
Mean
Median
Std. Dev.
Percentile (25%)
Percentile (75%)
CBO
24
8.32
6.38
14
LCOM
100
68.72
84
36.89
56.5
96
NOC
0.21
0.7
RFC
222
34.38
28
36.2
10
44.5
WMC
100
17.42
12
17.45
22
LOC
2313
211.25
108
345.55
235.5
DIT
1.26
1.5
439
The low values of DIT and NOC indicate that inheritance is not much used in the system.
The LCOM metric has high values. Table 10.9 shows the correlation among metrics, which is
an important static quantity.
Table 10.9. Correlations among metrics
Metric
CBO
LCOM
NOC
RFC
WMC
LOC
CBO
LCOM
0.256
NOC
0.03
0.028
RFC
0.386
0.334
0.049
WMC
0.245
0.318
0.035
0.628
LOC
0.572
0.238
0.039
0.508
0.624
DIT
0.4692
0.256
0.031
0.654
0.136
0.345
DIT
The correlation coefficients shown in bold are significant at 0.01 level. WMC, LOC, DIT
metrics are correlated with RFC metric. Similarly, the WMC and CBO metrics are correlated
with LOC metric. Therefore, it shows that these metrics are not totally independent and
represent redundant information.
The next step of our analysis found the combined effect of object oriented metrics on fault
proneness of class at various severity levels. We obtained from four multivariate fault
prediction models using LR method. The first one is for high severity faults, the second one is
for medium severity faults, the third one is for low severity faults and the fourth one is for
ungraded severity faults.
We used multivariate logistic regression approach in our analysis. In a multivariate logistic
regression model, the coefficient and the significance level of an independent variable represent
the net effect of that variable on the dependent variable in our case fault proneness. Tables
10.10, 10.11, 10.12 and 10.13 provide the coefficient (B), standard error (SE), statistical
significance (sig), odds ratio (exp(B)) for metrics included in the model.
Two metrics CBO and SLOC were included in the multivariate model for predicting high
severity faults. CBO, LCOM, NOC, SLOC metrics were included in the multivariate model for
predicting medium severity faults. Four metrics CBO, WMC, RFC and SLOC were included
in the model predicted with respect to low severity faults. Similarly, CBO, LCOM, NOC, RFC,
SLOC metrics were included in the ungraded severity model.
Table 10.10. High severity faults model statistics
Metric
S.E.
Sig.
Exp(B)
CBO
0.102
0.033
0.002
1.107
SLOC
0.001
0.001
0.007
1.001
Constant
2.541
0.402
0.000
0.079
440
Software Testing
S.E.
Sig.
Exp(B)
CBO
0.190
0.038
0.0001
1.209
LCOM
0.011
0.004
0.009
0.989
NOC
1.070
0.320
0.001
0.343
SLOC
0.004
0.002
0.006
1.004
Constant
0.307
0.340
0.367
0.736
S.E.
Sig.
Exp(B)
CBO
0.167
0.041
0.001
1.137
RFC
0.034
0.010
0.001
0.971
WMC
0.047
0.018
0.028
1.039
SLOC
0.003
0.001
0.001
1.003
Constant
1.447
0.371
0.005
0.354
S.E.
Sig.
Exp(B)
CBO
0.195
0.040
0.0001
1.216
LCOM
0.010
0.004
0.007
0.990
NOC
0.749
0.199
0.0001
0.473
RFC
0.016
0.006
0.006
0.984
SLOC
0.007
0.002
0.0001
1.007
Constant
0.134
0.326
0.680
1.144
To validate our findings we performed a 10-cross validation of all the models. For the
10-cross validation, the classes were randomly divided into 10 parts of approximately equal
size (9 partitions of 15 data points each and one partition of 10 data points ).
The performance of binary prediction models is typically evaluated using confusion matrix
(see Table 10.14). In order to validate the findings of our analysis, we used the commonly used
evaluation measures sensitivity, specificity, completeness, precision and ROC analysis.
Table 10.14. Confusion matrix
Observed
Predicted
1.00 (Fault-Prone)
1.00 (Fault-Prone)
441
Precision
It is defined as the ratio of number of classes correctly predicted to the total number of
classes.
TFP TNFP
Precision =
TFP FNFP FFP TNFP
Sensitivity
It is defined as the ratio of the number of classes correctly predicted as fault prone to the total
number of classes that are actually fault prone.
Recall =
TFP
TFP FNFP
Completeness
Completeness is defined as the number of faults in classes classified fault-prone, divided by
the total number of faults in the system.
Model I
Model II
Model III
Model IV
0.25
64.60
66.40
66.21
59.81
0.686
0.044
0.77
70.70
66.70
68.29
74.14
0.754
0.041
0.49
61.50
64.20
63.48
71.96
0.664
0.053
0.37
69.50
68.60
68.96
79.59
0.753
0.039
442
Software Testing
The ROC curve for the LR model with respect to the high, medium, low and ungraded
severity of faults is shown in Figure 10.4.
Figure 10.4. ROC curve for (a) Model I (b) Model II (c) Model III (d) Model IV using LR method
Based on the findings from this analysis, we can use the SLOC and CBO metrics in earlier
phases of the software development to measure the quality of the systems and predict which
classes with higher severity need extra attention. This can help the management focus resources
on those classes that are likely to cause serious failures. Also, if required, developers can
reconsider design and thus take corrective actions. The models predicted in the previous
section could be of great help for planning and executing testing activities. For example, if one
has the resources available to inspect 26 percent of the code, one should test 26 percent classes
predicted with more severe faults. If these classes are selected for testing, one can expect
maximum severe faults to be covered.
443
development life cycle? The estimation may help us to calculate the cost of software
maintenance, which a customer may like to know as early as possible in order to plan the
costing of the project.
Maintenance effort is defined as the number of lines of source code added or changed during
the maintenance phase. A model has been used to predict maintenance effort using Artificial
Neural Network (ANN) [AGGA06, MALH09]. This is a simple model and predictions are
quite realistic. In this model, maintenance effort is used as a dependent variable. The
independent variables are eight object oriented metrics namely WMC, CBO, RFC, LCOM,
DIT, NOC, DAC and NOM. The model is trained and tested on two commercial software
products User Interface Management System (UIMS) and Quality Evaluation System
(QUES), which are presented in [LI93]. The UIMS system consists of 39 classes and the
QUES system consists of 71 classes.
The ANN network used in model prediction belongs to Multilayer Feed Forward networks
and is referred to as M-H-Q network with M source nodes, H nodes in hidden layer and Q
nodes in the output layer. The input nodes are connected to every node of the hidden layer but
are not directly connected to the output node. Thus, the network does not have any lateral or
shortcut connection.
Artificial Neural Network (ANN) repetitively adjusts different weights so that the difference
between the desired output from the network and actual output from ANN is minimized. The
network learns by finding a vector of connection weights that minimizes the sum of squared
errors on the training data set. The summary of ANN used in the model for predicting
maintenance effort is shown in Table 10.16.
Table 10.16. ANN summary
Architecture
Layers
Input Units
Hidden Units
Output Units
Training
Transfer Function
Tansig
Algorithm
Back Propagation
Training Function
TrainBR
The ANN was trained by the standard error back propagation algorithm at a learning rate of
0.005, having the minimum square error as the training stopping criterion.
The main measure used for evaluating model performance is the Mean Absolute Relative
Error (MARE). MARE is the preferred error measure for software measurement researchers
and is calculated as follows [FINN96]:
n
MARE =
i 1
estimate actual
actual
444
Software Testing
Where:
- estimate is the predicted output by the network for each observation
- n is the number of observations
To establish whether models are biased and tend to over or under estimate, the Mean
Relative Error (MRE) is calculated as follows [FINN96]:
n
MRE =
i
estimate actual
actual
1
v min A
max A min A
(ii)
P1
P2
P3
Eigenvalue
3.74
1.41
1.14
Variance %
46.76
17.64
14.30
(Contd.)
445
(Contd.)
P.C.
P1
P2
P3
Cumulative %
46.76
64.40
78.71
DAC
0.796
0.016
0.065
DIT
0.016
0.220
0.85
LCOM
0.820
0.057
0.079
MPC
0.094
0.937
0.017
NOC
0.093
0.445
0.714
NOM
0.967
0.017
0.049
RFC
0.815
0.509
0.003
WMC
0.802
0.206
0.184
We employed the ANN technique to predict the maintenance effort of the classes. The inputs
to the network were all the domain metrics P1, P2 and P3. The network was trained using the
back propagation algorithm. Table 10.16 shows the best architecture, which was experimentally
determined. The model is trained using training and test data sets and evaluated on validation
data set. Table 10.18 shows the MARE, MRE, r and p-value results of the ANN model
evaluated on validation data. The correlation of the predicted change and the observed change
is represented by the coefficient of correlation (r). The significant level of a validation is
indicated by a p-value. A commonly accepted p-value is 0.05.
Table 10.18. Validation results of ANN model
MARE
0.265
MRE
0.09
0.582
p-value
0.004
Percent
0-10%
50
11-27%
9.09
28-43%
18.18
>44%
22.72
For validation data set, the percentage error smaller than 10 per cent, 27 per cent and 55 per
cent is shown in Table 10.19. We conclude that the impact of prediction is valid in the
population. Figure 10.5 plots the predicted number of lines added or changed versus the actual
number of lines added or changed.
446
Software Testing
Figure 10.5. Comparison between actual and predicted values for maintenance effort
Software testing metrics are one part of metrics studies that focus on the testing issues of
processes and products. Test suite effectiveness, source code coverage, defect density and
review efficiency are some of the popular testing metrics. Testing efficiency may also be
calculated using size of software tested/resources used. We should also have metrics to provide
immediate, real time feedback to testers and project manager on quality of testing during each
test phase rather waiting until the release of the software.
There are many schools of thought about the usefulness and applications of software
metrics. However, every school of thought accepts the old quote of software engineering i.e.
You cannot improve what you cannot measure; and you cannot control what you cannot
measure. In order to control and improve various activities, we should have something to
measure such activities. This something differs from one school of thought to another school
of thought. Despite different views, most of us feel that software metrics help to improve
productivity and quality. Software process metrics are widely used in various standards and
models such as Capability Maturity Model for Software (CMM-SW) and ISO9001. Every
organization is putting serious efforts to implement these metrics.
447
448
Software Testing
449
EXERCISES
10.1 What is software metric? Why do we need metrics in software? Discuss the areas of
applications and problems during implementation of metrics.
10.2 Define the following terms:
(a) Measure
(b) Measurement
(c) Metrics
10.3 (a) What should we measure during testing?
(b) Discuss things which can be measured with respect to time
(c) Explain any reliability model where emphasis is on failures rather than faults.
450
Software Testing
451
FURTHER READING
An in-depth study of 18 different categories of software complexity metrics was
provided by Zuse, where he tried to give the basic definition for metrics in each
category:
H. Zuse, Software Complexity: Measures and Methods, Walter De Gryter,
Berlin 1990.
Fentons book is a classic and useful reference, and it gives a detailed discussion on
measurement and key definition of metrics:
N. Fenton and S. Pfleeger, Software Metrics: A rigorous & Practical Approach,
PWS Publishing Company, Boston, 1997.
A detailed description on software reliability and contributions from many of the
leading researchers may be found in Lyus book:
M. Lyu, Handbook of Software Reliability Engineering, IEEE Computer
Press, 1996.
Aggarwal presents a good overview of software reliability models. Musa provides a
detailed description particularly on software reliability models:
J.D. Musa, A. Lannino, and K. Okumoto Software Reliability: Measurement,
Prediction and Applications, Mc Graw Hill, New York, 1987.
K.K. Aggarwal, Reliability Engineering, Kluwer, New Delhi, India, 1993.
The first significant OO design metrics suite was proposed by Chidamber and Kemerer
in 1991. Then came another paper by Chidamber and Kemerer defining and validating
metrics suite for OO design in 1994. This metrics suite has received the widest
attention in empirical studies:
S. Chidamber, and C. Kamerer, A Metrics Suite for Object-Oriented Design,
IEEE Transactions on Software Engineering, vol. 20, no. 6, pp. 476493, 1994.
More detailed accounts on our fault prediction models at various severity levels of
faults can be found in:
Y. Singh, A. Kaur, R. Malhotra, Empirical Validation of Object-Oriented
Metrics for Predicting Fault Proneness Models, Software Quality Journal, vol.
18, no. 1, pp.335, 2010.
There are several books on research methodology and statistics with their
applications:
C.R. Kothari, Research Methodology: Methods and Techniques, New Delhi,
New Age International Limited, 2004.
W.G. Hopkins, A New View of Statistics, Sport science, 2003.
For a detailed account of the statistics needed for model prediction using logistic
regression (notably how to compute maximum likelihood estimates, R2, significance
values) see the following text book and research paper:
D. Hosmer, and S. Lemeshow, Applied Logistic Regression, New York: John
Wiley & Sons, 1989.
V. Basili, L. Briand, and W. Melo, A Validation of Object-Oriented Design
Metrics as Quality Indicators, IEEE Transactions on Software Engineering, vol.
22, no. 10, pp. 751761, 1996.
452
Software Testing
Khoshgaftaar et al. introduced the use of the Artificial Neural Networks as a tool for
predicting software quality. They presented a large telecommunication system,
classifying modules as fault prone or not fault prone:
T.M. Khoshgaftaar, E.D. Allen, J.P. Hudepohl, S.J. Aud, Application of Neural
Networks to Software Quality Modeling of a Very Large Telecommunications
System, IEEE Transactions on Neural Networks, vol. 8, no. 4, pp. 902909,
July 1997.
For full details on ROC analysis and cross validation methods see the publications:
J. Hanley, BJ. McNeil, The Meaning and Use of the Area under a Receiver
Operating Characteristic ROC Curve, Radiology, vol. 143, pp. 2936, April
1982.
M. Stone, Cross-Validatory Choice and Assessment of Statistical Predictions,
J. Royal Stat. Soc., vol. 36, pp. 111147, 1974.
A workshop on Empirical Studies of Software Development and Evolution (ESSDE)
was conducted in Los Angeles in May 1999. The authors (well-known and leading
researchers) present in this workshop summarized the state of art and future directions
regarding empirical studies in object oriented systems:
L. Briand, E. Arisholm, S. Counsell, F. Houdek, P. Thvenod-Fosse, Empirical
Studies of Object-Oriented Artifacts, Methods, and Processes: State of the Art
and Future Directions, Empirical Software Engineering: An International
Journal, 2000.
Briand and Wst performed a critical review of existing work on quality models in
object-oriented Systems:
L. Briand and J. Wst, Empirical Studies of Quality Models in Object-Oriented
Systems, Advances in Computers, 2002.
A recent paper examining the confounding effect of the relationship between Object
Oriented metrics and change proneness is published in IEEE Transactions on Software
Engineering:
Y. Zhou, H. Leung and B. Xu, Examining the Potentially Confounding Effect
of Class Size on the Associations between Object-Oriented Metrics and ChangeProneness, IEEE Transactions on Software Engineering, vol. 35, no. 5,
pp. 607623, September/October 2009.
11
Testing Web Applications
Web pages and websites have become an integral part of modern civilization. Everyday a new
website for some specific application is hosted and joins the bandwagon of the Internet. We
may visit a website and may find a good number of web pages designed for specific
applications. The quality of a web application must be assured in terms of response time, ease
of use, number of users, ability to handle varied spikes in traffic, provide accurate information,
etc. Compromise in any of these parameters may compel the customers to move on to the
competitors site. Testing these web pages is a real challenge because conventional testing
techniques may not be directly applicable.
454
Software Testing
A web application may consist of multiple web servers and database servers. These
applications need only web browser to be installed on the client machine. The web browser is
a software programme that retrieves, interprets and presents information to the client in the
desirable format. A web application depicts a three-tier architecture shown in Figure 11.2. It
comprises of client machines, web server incorporating business logic and database server for
handing data storage.
In three-tier architecture, the applications are partitioned into three separate layers; hence
changes in one tier do not have any effect on the other tier. The advantages of three-tier
architecture include less disk space required at the client machine, automatic up gradation of
software and robustness. A comparison between client-server application and web application
is given in Table 11.1.
The client application may comprise of many active contents written in Java script,
VBscript, DHTML and other technologies. Web servers use dynamic technology (ASP, JSP,
Perl, CGI, Python) to render the active content. This may invoke incompatibility issues due to
existence of varied browsers.
Client/Server Applications
1.
It is a 2-tier application.
It is a 3-tier application.
2.
Client/Server Applications
3.
It is an expansive activity.
4.
5.
6.
7.
8.
For example, if a student wants to know his/her final result, the following steps may be followed
(see Figure 11.3):
(i) The student requests the client application (with web browser installed).
(ii) The client application sends the request to the web server.
(iii) The web server (incorporating business logic) queries the database server.
(iv) The database server returns the students marks in all the subjects.
(v) The web server performs all calculations to generate the final marks.
(vi) The web server returns the final marks to the client application.
(vii) The client application displays the final marks to the student.
Popularity of the internet has increased the usage of three-tier architecture. Testing such an
application is very important and specialized techniques may be used to ensure the correct
operation. Non-functional requirements like usability, performance, compatibility, security,
etc. need to be tested thoroughly to ensure the expected quality level of any web application.
456
Software Testing
(i) Functionality
(ii) Usability
(iii) Browser compatibility
(iv) Security
(v) Load and stress
(vi) Storage and Database
There are numerous issues that need to be considered that are specific to web application, hence,
only a sub-set of conventional testing techniques are applicable for testing a web application.
In Table 11.2 test cases corresponding to addition, deletion and updation in the online shopping
cart are presented.
Table 11.2. Sample functional test cases of order process of an online shopping web application
Test
Description (steps followed)
Inputs
case id
TC1
1. Search the product gallery to
Search string
decide which items to purchase.
TC2
1. Register on the website to place
an order
password, shipping details
(address, city, state, zip
code) and billing details
(address, city, state, zip
code).
TC3
1. Log into the website.
Login id, password
Expected output
List of items searched are
displayed.
If the information entered
is valid the user is registered successfully, otherwise an appropriate error
message is displayed.
Item is successfully added
in the shopping cart.
TC4
TC5
TC6
TC7
item is successfully
deleted from the shopping
cart.
(Contd.)
458
Software Testing
(Contd.)
Test
Description (steps followed)
case id
TC8
1. Log into the website.
TC9
TC10
Inputs
Expected output
After authentication,
amount is transferred and
items are delivered successfully.
given in Figure 11.4. In Table 11.5, the given user interface testing checklist includes issues on
hyperlinks for testers to ensure their proper functioning.
Table 11.3. Navigation testing test cases for online shopping website
Test
case id
Description
Inputs
Expected output
TC1
Link1=Home
Link2=About us
Link3=Product Gallery
Link4=Contact us
Link5=Registration
Link6=Shopping Cart
Link7=Order Status
Link8=Feedback
TC2
Link1=Home
Link2=About us
Link3=Product Gallery
Link4=Contact us
Link5=Registration
Link6=Shopping Cart
Link7=Order Status
Link8=Feedback
TC3
Search string
TC4
Manual checking of hyperlinks can be very time consuming. There are various online tools
available for checking broken links, accuracy and availability of links and obtaining advice on
search engines. Some tools for navigation testing include Performance Technologies TestLink,
W3Cs Link checker, Xenus LinkSleuth, Dead Links Dead Links, LinkTigers LinkTiger,
Viable Software Alternatives LinkRunner, Elsops LinkScan, REl Softwares Link Validator,
UCIs MQMspider and Illumits WebLight.
460
Software Testing
Description
Inputs
Expected output
TC1
TC3
TC4
Description
Yes/No/NA
Remarks
Hyperlinks
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
Does the link bring the user to the correct web page?
Tables
11.
Are the columns wide enough or the text wraps around the
rows?
12.
13.
14.
15.
Frames
16.
17.
18.
19.
20.
Forms
21.
22.
the correct sequence?
(Contd.)
462
Software Testing
(Contd.)
S. No.
Description
23.
24.
25.
26.
27.
type?
28.
29.
30.
31.
characters?
32.
33.
34.
Can the user select more than one option in radio buttons?
35.
36.
37.
38.
39.
40.
41.
42.
43.
Is all the data inside the list/combo box listed in chronological order?
44.
45.
of values appropriately (boundary value analysis)?
46.
47.
48.
49.
50.
Yes/No/NA
Remarks
tions.
Learnability
on the website.
Satisfaction
Usability testing refers to the procedure employed to evaluate the degree to which the
software satisfies the specified usability criteria. A systematic approach to usability testing is
given in Figure 11.6.
464
Software Testing
If all the steps are followed in the given order, we may be able to perform effective and efficient
usability testing.
Profession
16-20
21-30
31-40
41-50
>50
Male
Female
____________________
Education level
Higher secondary
Gender
Graduate
Customer type
Post graduate
Whole sale purchaser
Retail purchaser
Shopping experience
Shopping frequency
We must remember two things when identifying the participants, that is, selection of the
right number and type of participants. The number of participants depends upon the degree of
confidence required, time and resources available, and availability of participants. We must
select an appropriate number of participants from different groups of target users as each group
of users will use the website with a different purpose. For example, consider the online
shopping website, when testing this website, one group of participants may be selected from
`wholesale purchaser category and other from retail purchaser category.
Yes
No
2.
Yes
No
3.
Yes
No
4.
Very less
Less
5.
_____________________________________
High
Very high
_____________________________________
Efficiency
6.
Very
easy
Easy
Difficult
Very difficult
7.
Very
easy
Easy
Difficult
Very difficult
(Contd.)
466
Software Testing
(Contd.)
8.
Very less
Less
High
Very high
9.
Yes
No
10.
_____________________________________
_____________________________________
11.
<3
4 to 6
7-15
>15
12.
Very
slow
Slow
Fast
Very fast
Yes
No
Remarks
Completeness
13.
____________________
14.
Yes
No
Remarks
____________________
15.
Very less
Less
16.
Yes
No
17.
Yes
No
High
Very high
Learnability
18.
Very
easy
Easy
Difficult
Very difficult
19.
Very
easy
Easy
Difficult
Very difficult
20.
Very less
Less
High
Very high
(Contd.)
22.
23.
Very
easy
Easy
Difficult
Very difficult
_____________________________________
Yes
No
Yes
No
25.
Very
easy
Easy
26.
_____________________________________
Difficult
Very difficult
_____________________________________
27.
Yes
No
28.
Very less
Less
High
Very high
29.
Very
easy
Easy
Difficult
Very difficult
30.
Very
useful
Useful
Less
useful
Not useful
31.
Yes
No
32.
Yes
No
33.
Very
useful
Useful
Less
useful
Not useful
(Contd.)
468
Software Testing
(Contd.)
General
34.
_____________________________________
_____________________________________
35.
Very
easy
Easy
Difficult
Very difficult
36.
Very
easy
Easy
Difficult
Very difficult
37.
Very less
Less
High
Very high
38.
Very
easy
Easy
Difficult
Very difficult
39.
_____________________________________
40.
Very
good
Good
Bad
Very bad
The questionnaire may be modified according to the requirement of the application. Figure
11.9 shows an example of questions from a usability test of an online shopping website.
1.
How easily and successfully are you able to register on this website?
2.
3.
How closely did the order process meet with your specifications?
4.
5.
How do you feel about the time taken to complete the order process (in terms of
time and number of steps)?
(i)
(ii)
470
Software Testing
(i)
(ii)
(iii)
(iv)
(v)
IE 6
IE 7
IE 8
Firefox
Navigator Chrome
Opera
Safari
Lynx
Audio
Video
Text
Platform
Win XP
Win 98
Win NT
Win
2000
Mac
Linux
Form
(i)
Configuration and compatibility testing must begin after the functional testing is complete,
otherwise it will be difficult to determine whether the fault is due to functionality or
compatibility.
(ii) Web compatibility matrix must include the most popular browsers that are expected to
access the website.
(iii) Compatibility testing must take into account the target audiences.
(iv) Configuration and compatibility test cases must be executed by different tester groups
in parallel so that the testing covers a large number of platforms and environments.
(v) The records and results of testing may be kept for future use and predictions.
The checklist given in Table 11.8 addresses the issues of configuration and compatibility
testing.
Table 11.8.
S. No.
Description
1.
2.
3.
4.
5.
6.
7.
8.
Yes/No/NA
Remarks
9.
10.
472
Software Testing
A web application must fulfil the above mentioned primary security requirements. Testing
the threats and vulnerabilities in a web application is an important activity. The tester must
check the web application against all known internet threats.
Virus threats are the most sophisticated types of threats to web applications that may enter
from the network. A virus is a program that modifies other programs by attaching itself to the
program, which may infect the other programs when the host program executes. The virus may
perform any function such as deleting files and programs. The goal of testing the web application
against virus threats is to verify the methods of virus prevention, detection and removal. Virus
testing ensures that:
(i)
(ii)
The antivirus software prevents the virus from entering into the system.
The antivirus software efficiently detects an infection, removes all traces of that
infection from the program and restores the program to its original state.
(iii) Periodical updates and scans are conducted to keep the system updated and prevent
new viruses from penetrating into the system.
Computer networks have grown rapidly with the evolution of the internet. Internet
connectivity has become a necessity for many organizations. Although the internet provides an
organization access to the outside world, it also provides the intruders an opportunity to access
the organizations Local Area Network (LAN). If the systems are affected from a security
failure, they must be recovered which may consume lots of effort and resources. Firewalls are
an effective means of protecting a network from intruders. It is placed between the organizations
internal network and the external network (see Figure 11.11). It serves as a security wall
between the outside world and the organizations internal network. The aim of this wall is to
protect the LAN from the internet-based security threats and it serves as a single point where
all security checks are imposed. The firewall may be a single computer or a set of computers
may be combined to serve as a firewall [STALL01].
The idea of firewall testing is to break the system by bypassing the security mechanisms to
gain access to the organizations sensitive information. This enables the tester to check the
effectiveness of the firewall. Firewall testing ensures that the zones of risks are identified
correctly, packet filtering occurs according to the designed rules, penetration within the
boundaries established by the firewall is not possible and events are timely logged to keep
track of an intruder.
Security testing requires an experienced tester having thorough knowledge of internet related
security issues. The security expert needs to check security issues including authentication,
474
Software Testing
unauthorized access, confidentiality, virus, firewalls and recovery from failure. Table 11.9 provides
a comprehensive checklist to test an application for security threats and vulnerabilities.
Table 11.9. Security testing checklist
S. No.
Description
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
Firewall
11.
Yes/No/NA
Remarks
12.
13.
14.
15.
Data Security
16.
Are the data validations tested?
17.
Is sensitive information (such as password, credit card
18.
Are privileges to access data enforced?
19.
20.
Is sensitive and important data kept at secure locations?
Encryption
21.
Are encryption standards enforced?
22.
Is there any procedure followed to identify what is to be
encrypted?
23.
Is sensitive and critical information (such as password,
24.
Description
Does the enforcement of encryption standard affect the
speed of the web page?
Yes/No/NA
Remarks
476
Software Testing
Response time
Memory available
Network bandwidth
Number of users
User type
Time to download
Varied client machine configurations
The goal of performance testing is to evaluate the applications performance with respect to
real world scenarios. The following issues must be addressed during performance testing:
(i) Performance of the system during peak hours (response time, reliability and availability).
(ii) Points at which the system performance degrades or system fails.
(iii) Impact of the degraded performance on the customer loyalty, sales and profits.
The above issues require metrics to be applied to the system under test in order to measure
the system behaviour with respect to various performance parameters.
guideline is to identify the necessary resources in order to establish the lab for conducting load
tests. In addition to identification of resources, the load testing team members must be
determined to solve specific problems in the following areas:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
The configuration of environment for load tests include network configuration, setting up
server, and database set up, etc.
Metric
Description
2.
3.
4.
5.
6.
7.
8.
Cycles
478
Software Testing
(Contd.)
S. No.
Metric
Description
Performance
9.
10.
Response time
11.
Wait time
12.
Throughput
13.
Elapsed time
14.
15.
16.
17.
Connections failed
18.
19.
Failure
Working with the load testing team earlier in the software life cycle is important, as it will
help in identifying metrics and will ensure performance of the web site.
The system performance is expected to degrade when a large number of users hit the web
site simultaneously. After the completion of stress tests, the testing team must analyze the
noted systems performance degradation points and compare them with the acceptable
performance of the system. Risk analysis described in chapter 7 may be used to make decisions
of the acceptable level of performance degradation of the system.
Description
Yes/No/NA
Remarks
Users
1.
the web application?
2.
3.
4.
5.
Response Time
6.
480
Software Testing
(Contd.)
S. No.
Description
7.
8.
Yes/No/NA
Remarks
9.
10.
Database
11.
12.
13.
14.
15.
Tools
16.
17.
18.
19.
General
20.
21.
22.
23.
24.
25.
Table 11.11 presents a generic list of performance testing questions that are common to most
of the web applications. The performance testing checklist may help to uncover the major
performance related problems early in software development process.
Testing data-centric web applications is important to ensure their error-free operation and increased
customer satisfaction. For example, consider the example for purchasing items from an online store.
If the user performs a search based on some keywords and price preferences, a database query is
created by the database server. Suppose due to some programming fault in the query, the query does
not consider the price preferences given by the customer, this will produce erroneous results. These
kinds of faults must be tested and removed during database testing.
Important issues in database testing may include:
(i) Data validation
(ii) Data consistency
(iii) Data integrity
(iv) Concurrency control and recovery
(v) Data manipulation operations such as addition, deletion, updation and retrieval of data.
(vi) Database security
A database must be tested for administrative level operations such as adding, deleting and
updating an item in the database, and user operations such as searching an item from the
database or providing personal details. In the example of the online shopping website, the most
common administrative operations and user operations include:
Administrative operations
(i)
(ii)
(iii)
(iv)
User operations
(i) Searching items from the database
(ii) Registering into the website involves storing the users personal details
(iii) Placing an order involves storing user preferences and purchase details into the
database
(iv) Providing feedback involves storing information in the database
(v) Tracking status of the order placed
In chapter 6, we generated test cases based on administrative operations for testing an
application. Table 11.12 shows sample test cases based on a user operation in an online shopping
website.
Table 11.12. Sample database test cases
Test
case id
Inputs
Expected output
Search string
(Contd.)
482
Software Testing
(Contd.)
Test
case id
Inputs
Expected output
shipping details
(address, city,
state and zip code)
and billing details
(address, city, state
and zip code).
Providing feedback on the website
TC3
Description
Overall Rating
Consistency
1.
Bad
Good
2.
Bad
Good
3.
Bad
Good
4.
Bad
Good
5.
Bad
Good
Flexibility
6.
Bad
Good
7.
Bad
Good
Bad
Good
8.
9.
Bad
Good
10.
Bad
Good
Learnability
11.
Bad
Good
12.
Bad
Good
13.
Bad
Good
14.
Bad
Good
15.
Bad
Good
User Guidance
16.
Bad
Good
17.
Bad
Good
18.
Bad
Good
19.
Bad
Good
20.
Bad
Good
(Contd.)
484
Software Testing
(Contd.)
S. No.
Description
Overall Rating
21.
Bad
Good
Bad
Good
intended audience?
22.
23.
Bad
Good
24.
Bad
Good
25.
Bad
Good
Once the users opinion is obtained, it is important to identify useful fault reporting, suggestions
and recommendations. The following criteria can be used to decide which suggestion needs
attention:
1. Frequency of suggestion: How many users have given the same suggestion or
recommendation? If a small number of users are making the same request, then we must
think twice before implementing the suggestion.
2. Source of feedback: Who is providing the suggestion? It is vital to make sure that
suggestions come from regular users and not accidental users.
3. Cost of implementing the suggestion: Is the suggested idea worth implementing? The
correctness of the proposed change and its impact on the cost and schedule must be
analyzed carefully. The benefits of implementing the suggested idea to the business must
be determined.
4. Impact of implementing the suggestion: Will implementing the suggestion increase
complexity of the website? Will the change be compatible with the other functionalities
of the website? It is important to obtain the answers to these questions as the results of
implementing a change are sometimes unpredictable.
Figure 11.13 shows the typical post deployment testing procedure of a web application.
Obtaining users
feedback
(questionnaire/
survey)
Review frequency
and source of
feedback
Analyze impact
of implementing
change
Estimate cost
of implementing
suggested idea
Not
approved
Begin
Implement and
test the suggested
functionality
Not
approved
Deploy the
functionality
End
This procedure is necessary in order to ensure that the suggested functionalities are properly
addressed, fixed and closed. If the suggested idea is approved after analyzing its cost and
impact, then the suggested functionality is implemented, tested and deployed.
Description
Page Composition
Number of words
Number of links
Links on a page
Redundant links
Embedded links
Wrapped links
Readability
Number of !s
Content percentage
Navigation percentage
Number of graphics
Page size
Image size
Animated elements
486
Software Testing
(Contd.)
Metric
Description
Page Formatting
Font styles
Types of fonts
Text emphasis
Screen coverage
Number of screens
Text clustering
Text in clusters
Text position
Number of lists
List on a page
Number of rules
Number of colours
Line length
Leading
Frames
Use of frames
Number of tables
Image quality
Link quality
Layout quality
Download speed
Web applications differ from each other in various dimensions such as page content, page size,
quality, reliability, screen coverage, etc. The metrics given in Table 11.13 demonstrate quantitative
web usability metrics that can provide useful insights into distinguishing features of web pages.
These metrics give us some ideas about thoroughness and effectiveness of web testing.
488
Software Testing
490
Software Testing
EXERCISES
11.1 What is web testing? Differentiate between client/server applications and web application.
11.2 (a) What are the key areas in testing a web application?
(b) Which conventional testing techniques are applicable in testing a web application?
11.3 What is user interface testing? Explain with the help of an example.
11.4 Consider a web application for registering users in order to create an email account.
The registration form includes the following fields:
(a) User name
(b) Password
(c) Re-type password
(d) First name
(e) Last name
(f) Address
(g) Country
(h) Date of birth
(i) Gender
(j) Security question
(k) Answer to security question
Generate test cases using form based testing.
11.5 Explain the significance of navigation testing. List some commercially available tools
for link testing.
11.6 Consider the web application given in exercise 11.4. Design test cases using formbased testing. Make the necessary assumptions.
492
Software Testing
11.21 What is database testing? Identify administrative and user operations of an online
purchase of a website.
11.22 What aspects must be covered in order to ensure database correctness in database
testing? Explain with the help of an example.
11.23 What is post deployment testing? How are surveys helpful in post deployment testing?
Explain the criteria that must be followed for deciding which suggested idea must be
implemented.
11.24 (a) Identify major risks in testing a web application.
(b) What issues are considered while constructing test cases during user interface testing?
11.25 Explain three-tier architecture of a web application.
11.26 (a) Which metrics must be captured during web usability testing?
(b) Identify web page composition metrics.
11.27 Explain the significance of stress testing in analyzing the performance of a web
application.
11.28 Identify functional and performance test cases of the following web page:
FURTHER READING
Nguyen provides good introduction on testing web applications in the book:
H.Q. Nguyen, Testing Application on the Web, John Wiley and Sons, 2001.
A practical oriented approach is followed for generating test cases for a website in the
book:
L. Tamres, Introduction to Software Testing, Pearson Education, 2005.
12
Automated Test Data Generation
Is it possible to generate test data automatically? Generating test data requires proper
understanding of the SRS document, SDD document and source code of the software. We have
discussed a good number of techniques in the previous chapters for writing test cases manually.
How can we automate the process of writing test cases? What is the effectiveness of such
automatically generated test suite? Is it really beneficial in practice? We may ask such questions
wherever and whenever we discuss about the relevance of automated software test data
generation. As we all know, testing software is a very expensive activity and adds nothing to
the software in terms of functionality. If we are able to automate test data generation, the cost
of testing will be reduced significantly.
Automated test data generation is an activity that generates test data automatically for the
software under test. The quality and effectiveness of testing is heavily dependent on the
generated test data. Hoffman Daniel and others [DANI99] have rightly reported their views
as:
The assurance of software reliability partially depends on testing. However,
it is interesting to note that testing itself also needs to be reliable. Automating
the testing process is a sound engineering approach, which can make the
testing efficient, cost effective and reliable.
However, test data generation is not an easy and straightforward process. Many methods are
available with their proclaimed advantages and limitations, but acceptability of any one of
them is quite limited universally.
cases on the basis of selected techniques and execute them to see the correctness of the
software. How can we automate the process of generation of test cases / test data? The simplest
way is to generate test data randomly, meaning, without considering any internal structure and
/ or functionality of the software. However, this way may not be an appropriate way to generate
test data automatically.
Every statement of the source code should be executed at least once (statement coverage).
Every branch of the source code should be executed at least once (branch coverage).
Every condition should be tested at least once (condition coverage).
Every path of the source code should be executed at least once (path coverage).
Every independent path of the source code should be executed at least once (independent
path coverage).
6. Every stated requirement should be tested at least once.
7. Every possible output of the program should be verified at least once.
8. Every definition use path and definition clear path should be executed at least once.
There may be many such test adequacy criteria. Effectiveness of testing is dependent on the
definition of test adequacy criteria because it sets standards to measure the thoroughness of
testing. Our thrust will only be to achieve the defined standard and thus, the definition of test
adequacy criteria is very important and significant to ensure the correctness of the software.
When our test suite fails to meet the defined criteria, we generate another test suite that does
satisfy the criteria. Many times, it may be difficult to generate a large number of test data
manually to achieve the criteria and automatic test data generation process may be used to
satisfy the defined criteria.
496
Software Testing
which has diverted the desired flow of the program. A function minimization technique may
be used to correct the input variables in order to select and execute the desired path.
We consider a program for determination of the nature of roots of a quadratic equation. The
source code and program graph of the program are given Figure 12.1 and 12.2 respectively. We
select the path (1-7, 13, 25, 28-32) for the purpose of symbolic execution.
#include<stdio.h>
#include<conio.h>
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
void main()
{
int a,b,c,valid=0,d;
clrscr();
printf("Enter values of a, b and c:\n");
scanf("%d\n %d\n %d",&a,&b,&c);
if((a>=0)&&(a<=100)&&(b>=0)&&(b<=100)&&(c>=0)&&(c<=100)){
valid=1;
if(a==0){
valid=-1;
}
}
if(valid==1){
d=b*b-4*a*c;
if(d==0){
printf("Equal roots");
}
else if(d>0){
printf("Real roots");
}
else{
printf("Imaginary roots");
}
}
else if(valid==-1){
printf("Not quadratic");
}
else {
printf("The inputs are out of range");
}
getch();
}
The input variables a, b and c are assigned the constant variables x, y and z respectively. At
statement number 7, we have to select a false branch to transfer control to statement number
13. Hence, the first constraint of the constraint system for this path is:
(i) (x <= 0 or x > 100 or
y <= 0 or y > 100 or
z <= 0 or z > 100)
and (valid = 0)
498
Software Testing
The path needs statement number 13 to become false so that control is transferred to
statement number 25. The second constraint of the constraint system for this path is:
(ii) valid != 1
Finally, the path also needs statement number 25 to become false so that control is
transferred to statement number 28. Hence, the third constraint of the constraint system for this
path is:
(iii) valid != -1
To execute the selected path, all of the above mentioned constraints should be satisfied. One
of the test cases that traverse the path (1-7, 13, 25, 28-32) may include the following inputs:
Constraints
Feasible?
1.
101
50
Yes
2.
No
3.
4.
50
50 Not quadratic
No
Yes
(Contd.)
500
Software Testing
(Contd.)
S. No x
Constraints
Feasible?
5.
99
Equal roots
Yes
6.
50
50
Real roots
7.
50
50
50 Imaginary roots
Yes
Yes
Therefore, in symbolic execution, constraints are identified for every predicate node of the
selected path. We may generate test data for selected paths automatically using identified
constraints.
Christoph. C. Michael [MICH01] and others have discussed some problems of symbolic
execution in practice as:
One such problem arises in infinite loops, where the number of iterations
depends on a non constant expression. To obtain a complete picture of what
the program does, it may be necessary to characterize what happens if the
loop is never entered, if it iterates once, if it iterates twice, and so on.
We may choose a good number of paths by considering various possibilities in a loop. Thus,
it may be a time consuming activity. We may execute the program symbolically for one path
at a time. Paths may be selected by a user or by the software using some selection technique.
In addition to loops, there are other constructs which are not easily evaluated symbolically
like pointers, linked lists, graphs, trees, etc. There are also problems when the data is referenced
indirectly as:
x = (y + k [i]) * 2
The value of i should be known in advance to decide which element of the array k is being
referred to by k[i]. Hence, the use of pointers and arrays may complicate the process of
symbolic execution of a program. Another question that arises is how to handle the function
calls to modules where there is no access to the source code? Although any program can be
written without using pointers, arrays and function calls, but in practice, their usage is quite
popular due to the facilities they offer and may also help to reduce the complexity of the source
code. The above mentioned limitations may reduce the applicability of symbolic execution to
any reasonable size of the program.
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
10
#include<stdio.h>
#include<conio.h>
void main()
{
int x,y,a;
clrscr();
printf(Enter values of x and y:\n);
scanf(%d\n %d, &x, &y);
a=x-y;
if(a >= 10)
{
printf(\nx = %d,x);
}
else
{
printf(\ny = %d,y);
}
getch();
}
502
Software Testing
This function is also called objective function and the problem of test data generation is now
reduced to only function minimization. To get the expected input (say a for the program given
in Figure 12.3), we have to find values of x and y that minimizes f(x,y). The objective function
gives an indication to the test generator about its closeness to reaching the goal. The test generator
evaluates function f(x, y) to know how close x and y are to satisfy the present test requirement
being targeted. The test generator may further change the values of x and y and evaluate the
function f(x, y) again to know what changes in x and y bring the input closer to satisfy the
requirement. The test generator may keep on making changes in x and y and evaluates function
f(x, y) until the requirement is satisfied. Finally, the test generator may find values of x and y that
satisfy the targeted requirement. This is a heuristic technique and the objective function definition
is dependent on the goal, which is nothing but the satisfaction of a certain test requirement. The
program may, at the first time, execute on randomly generated input(s) and its behaviour is used
as the basis of a search for a satisfactory input. Hence, using different types of search methods,
the flow can be altered by manipulating the input in a way that the intended branch is taken
[EDVA99]. It may require many iterations before a suitable input is found. Dynamic test data
generation techniques generate a large amount of data during execution of the program to find
expected input(s) for a desired path. Based on test adequacy criteria, a search strategy is adopted
and the program is executed automatically till the chosen criteria is satisfied.
Initial population
Operators such as mutation and crossover with their values.
Parents
Offsprings
P1
P2
Crossover (k)
C1
C2
1.
11001100
10011111
11001111
10011100
2.
11001100
10011111
11001111
10011100
3.
11001100
10011111
11011111
10001100
4.
10000000
11111111
10001111
11110000
5.
10000000
11111111
10000011
11111100
504
Software Testing
Two point crossover operates by selecting two random genes within the parent strings with
subsequent swapping of bits between these two genes. If two parents are [V1, V2.Vm] and
[W1, W2.Wm], and the first randomly chosen point is k with (1 k m-1) and the second
random point is n with (k+1 n m) this would produce the offsprings as:
[(V1, V2.Vk), (Wk+1.Wn), (Vn+1.Vm)] and [(W1, W2,.Wk), (Vk+1, ..Vn), (Wn+1Wm)].
Examples of two point crossover operator is given in Table 12.3.
Table 12.3. Examples of two point crossover operator
Sr. No.
1.
2.
3.
4.
5.
Parents
P1
P2
11001100
10011111
11001100
10011111
11111111
00000000
11111111
00000000
11110000
00001111
Crossover points
k
n
2
6
1
7
2
5
7
8
2
4
C1
11011100
10011110
11000111
11111110
11000000
Offsprings
C2
10001111
11001101
00111000
00000001
00110000
Mutation changes random bits in the binary string. In the binary code, this simply means
changing the state of a gene from 0 to 1 or vice-versa. Mutation is like a random walk through
the search space and is used to maintain diversity in the population and to keep the population
from prematurely converging on one (local) solution. Mutation avoids local optima and creates
genetic material (say input) that may not be present in the current population. Mutation works
by randomly changing the chosen bits from 1 to 0 or from 0 to 1. For example, after crossover,
the generated offspring is [10001111] and the same may be mutated as [10011111] by changing
the 4th bit from 0 to 1. We may also find optimum mutation probability Pm which is the
reciprocal of the chromosome size(s) and is given as: Pm =1/5. It would be unlikely for the code
to have on average more than one bit of a chromosome mutated. If the mutation probability is
too low, there will be insufficient global sampling to prevent convergence to a local optimum.
If the rate of mutation is significantly increased, the location of global optima is delayed. After
the crossover and mutation operations, we have the original population of parents and the new
population of offspring. The survival of parents and offspring depends on the fitness value of
every member of the population and is calculated on the basis of a fitness function.
For example, suppose that the function to optimize is f(A) = A3 where A [0,10]. The fitness
function here may be the same and is defined as:
Fitness = A3
Table 12.4 gives the chromosomes with their fitness values calculated by the fitness function.
Table 12.4.
Sr. No.
Chromosome
A1
Fitness (fi)
1.
2.
3.
4.
C1
C2
C3
C4
2
4
6
1
8
64
216
1
Fitness=
fi = 289
i 1
12.3.4 Selection
The selection operator selects two individuals from a generation to become parents for the
recombination process (crossover and mutation). This selection may be based on fitness value
or could be made randomly. If the fitness value is used, then higher value chromosomes will
be selected.
506
Software Testing
(a)
(a)
The flow chart of the above steps is given in Figure 12.4. The process will iterate until the
population has evolved to form an optimum solution of the problems or until a maximum number
of iterations have taken place. The GA is an evolutionary algorithm where definition of the fitness
function for any search is very important. If the fitness function is effective, desired inputs will
be selected early and may help us to traverse the desired path of the program under testing.
GA generates first generation of data randomly (initial population) and then follows the
steps of the flow chart given in Figure 12.4 to improve the fitness of individuals. On the basis
of fitness value, crossover and mutation operators are used to generate offsprings (2nd
generation individuals). This process continues until all individuals reach the maximum
fitness. The system performs all operations from initial population to the last generation
automatically. It does not require user interference. The automated generated test data may
give better results with reduced effort and time [PRAV09].
Example 12.1: Consider the program to divide two numbers given in Figure 12.5. Generate
test data using genetic algorithm.
#include<stdio.h>
#include<conio.h>
void main()
{
int a, b, c;
printf(Enter value of a and b);
scanf(%d %d,&a, &b);
if(b==0)
{
printf(Invalid Data);
}
else
{
c=a/b;
}
printf(\n a/b= %d,c);
getch();
}
Figure 12.5. Program to divide two numbers
Solution:
The fitness function is given below:
Fitness function
F(x) =
x
y *100
The criteria for selection of mutation and crossover operators are given below:
If
F(x)
The steps for generating test data are given in the following tables. The mutation and
crossover (one-point or two-point) bits are randomly selected.
1. First Generation
S. No.
Operator
1.
Mutation
2.
100
Crossover
3.
80
Mutation
508
Software Testing
a = 5, b=2
00000101 00000010
The 2nd bit is selected randomly for mutation
After mutation
01000101 00000010
a = 100, b=3
01100100 00000011
Two-point crossover is performed. The first randomly chosen point is 4 and the second is 6.
After crossover
01100000 00000111
a = 80, b=5
01010000 00000101
The 7th bit is selected randomly for mutation
After mutation
01010010 00000101
2. Second Generation
S. No.
Operator
1.
69
Crossover
2.
96
Mutation
3.
82
Mutation
a = 69, b=2
01000101 00000010
One-point crossover is performed with randomly chosen point 5.
After crossover
01000010 00000101
a = 96, b=7
01100000 00000111
The 1st bit is selected randomly for mutation
After mutation
11100000 00000111
a = 82, b=5
01010010 00000101
The 1st bit is selected randomly for mutation
After mutation
11010010 00000101
3. Third Generation
S. No.
Operator
1.
66
Mutation
2.
224
Crossover
3.
210
Crossover
a = 66, b=5
01000010 00000101
The 1st bit is selected randomly for mutation
After mutation
11000010 00000101
a = 224, b=7
11100000 00000111
Two-point crossover is performed. The first randomly chosen point is 4 and the second is 6.
After crossover
11100100 00000011
a = 210, b=5
11010010 00000101
Two-point crossover is performed. The first randomly chosen point is 5 and the second is 7.
After crossover
11010100 00000011
4. Fourth Generation
S. No.
Operator
1.
2.
194
Crossover
228
3.
212
a = 194, b=5
11000010 00000101
Two-point crossover is performed. The first randomly chosen point is 5 and the second is 7.
After crossover
11000100 00000011
5. Fifth Generation
S. No.
1.
A
196
b
3
2.
228
3.
212
Operator
The criteria is satisfied after generating 5th generation population. The testing will be stopped
after achieving the defined criteria.
Example 12.2: Consider the program given in Figure 12.1. Generate test data using genetic
algorithm.
Solution:
The fitness function is given below:
F(x) = (10 a8(x, y))/10
if a8(x, y) < 10
510
Software Testing
The criteria for selection of mutation and crossover operators are given below:
If
F(x, y) 0.2, use mutation operator
0.2 < F(x, y) < 0.6, use crossover operator
F(x, y) 0.6, criteria satisfied
The tables below show the steps for generating test data.
1. First Generation
S. No.
Operator
1.
10
Crossover
2.
19
10
Mutation
x = 10, y=7
00001010 00000111
After crossover
00001011 00000110
x = 19, y=10
00010011 00001010
The 14th bit is selected randomly for mutation
After mutation
00010011 00001110
2. Second Generation
S. No.
Operator
1.
11
Crossover
2.
19
14
Crossover
x = 11, y=6
00001011 00000110
Two-point crossover is performed. The first randomly chosen point is 3 and the second is 6.
After crossover
00000111 00001010
x = 19, y=14
00010011 00001110
One-point crossover is performed with randomly chosen point 6.
After crossover
00010010 00001111
3. Third Generation
S. No.
1.
10
2.
18
15
Operator
After the 3rd generation, the criteria are satisfied. Testing will be stopped at this point.
Tool
Language/
Description
platform supported
Remarks
1.
T-VEC
Java/C++
Easy to use
2.
GS Data
Generator
Focuses on version
control and RDBMS
repository
3.
MIDOAN
Ada
Product of MIDOAN
software engineering
solutions for automatic
test data generation.
4.
CONFORMIQ
5.
CA DATAMACS
Mainframe
test Data
Generator (TDG)
or from scratch.
The software industry is focusing more on the quality of a product instead of increasing
functionality. Testing is the most popular and useful way to improve several quality aspects
such as reliability, security, correctness, ease of usage, maintainability, etc. If test data is
generated automatically, it will reduce the effort and time of testing. Although the process of
automated test data generation is still in the early stages, some reasonable success has been
achieved in the industry. Further research is needed to develop effective tools and techniques.
A special effort is also required to increase the flexibility, user friendliness and ease of use of
these tools.
512
Software Testing
EXERCISES
12.1 What is symbolic execution? How is it different from random testing?
12.2 Explain the significance of symbolic execution. What are the advantages and limitations
of this technique?
12.3 What is random testing? Why is it popular in practice?
12.4 List the advantages of automated test data generation over manual test data generation.
12.5 (a) What is dynamic test data generation?
(b) Differentiate between static and dynamic test data generation.
514
Software Testing
12.6 Consider a program to determine the division of a student based on his/her average
marks in three subjects. Design test cases and list constraints of each independent path
using symbolic execution.
12.7 Explain the use of fitness function. Consider a small program and construct its fitness
function.
12.8 What are genetic algorithms? How are they different from traditional exhaustive search
based algorithms?
12.9 Write short notes on the following:
(a) Mutation
(b) One point crossover
(c) Two point crossover
(d) Fitness function
12.10 Explain with the help of a flowchart the functionality of genetic algorithms.
12.11 Consider a program to subtract two numbers. Construct its fitness function and
generate test data using genetic algorithm.
12.12 List and explain genetic algorithm operators. Differentiate between one-point and twopoint crossover.
12.13 Define the following:
(a) Genetic algorithm
(b) Gene
(c) Chromosomes
(d) Initial population
12.14 Effective testing is dependent on the definition of test adequacy criteria. Explain and
comment.
12.15 List various tools for generating test data. Explain their purpose and applicability.
FURTHER READING
The detailed introduction on search based test data generation techniques is presented
in:
P. McMinn Search based Software Test Data Generation: A Survey, Software
Testing, Verification and Reliability, vol. 14, no. 2, pp.105156, June 2004.
An excellent and detailed survey on search based software engineering including
testing and debugging is given in:
Mark Harman, S. Afshin Mansouri and Yuanyuan Zhang, Search Based
Software Engineering: A Comprehensive Analysis and Review of Trends
Techniques and Applications, TR-09-03, April 2009.
A systematic review on automated test data generation techniques can be found at:
Shshid Mahmood, A Systematic Review of Automated Test Data Generation
Techniques, Master Thesis, MSE-2007:26, School of Engineering, Blekinge
Institute of Technology, Ronneby, Sweden, October 2007.
Richard and Offutt provide a new constraint based test data generation technique in:
R.A. DeMillo and A.J. Offutt, Constraint Based Automatic Data Generation,
IEEE Transactions on Software engineering, vol. 17, no. 9, pp. 900910,
September 1991.
Jones and his colleagues present a good technique for generating test cases using
genetic algorithm:
B.F. Jones, H.H. Sthamer and D.E. Eyres, Automatic Structural Testing Using
Genetic Algorithms, Software Engineering Journal, September, pp. 299306,
1996.
Few introductory research papers are listed to understand the concepts of automated
test data generation:
D.C. Ince, The Automatic Generation of Test Adapt, The Computer Journal,
vol. 30, no. 1, pp. 6369, 1987.
Jon Edvardsson, A Survey on Automatic Test Case Generation, In proceedings
of the Second Conference on Computer Science and Engineering in Linkoping,
pp. 2128, ECSEL, October, 1999.
M. Prasanna et al., A Survey on Automatic Test Case Generation, Academic
Open Internet Journal, http://www.acadjournal.com, vol. 15, 2005.
Anastasis A. Sofokleous and Andrew S. Andreou, Automatic Evolutionary Test
Data Generation for Dynamic Software Testing, Journal of Systems and
Software, vol. 81, pp. 18831898, 2008.
Silivia Regina Vergillo et al., Constraint Based Structural Testing Criteria,
Journal of Systems and Software, vol. 79, pp. 756771, 2006.
The following research paper presents the use of genetic algorithm for automatic test
data generation:
Christoph C. Michael, Gray McGraw and Michael A. Schatz, Generating
Software Test Data by Evolution, IEEE Transactions on Software Engineering,
vol. 27, no. 12, 10851110, December, 2001.
An excellent introduction on genetic algorithm can be obtained from:
Goldberg, David E, Genetic Algorithms in Search, Optimization and Machine
Learning, Kluwer Academic Publishers, Boston, MA, 1989.
M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge,
MA, 1996.
R. Poli, W.B.. Langdon, N. F McPhee, A Field Guide to Genetic Programming,
www.Lulu.com, 2008,
D. Whitley, A genetic algorithm tutorial, Statistics and Computing, vol. 4, pp.
6585, 1994.
A good Ph.D. work on test data generation using genetic algorithms is available in:
Harman-Hinrich Sthamer, The Automatic Generation of Software Test Data
using Genetic Algorithms, Ph.D. Thesis, University of Glamorgan, U.K.,
November 1995.
Appendix I
PROBLEM STATEMENT
A university is organized in different teaching schools and each school conducts a variety of
programmes. Admissions to the various programmes offered by each school are done through
counselling. Admission slips are issued to the admitted students giving their roll numbers,
name of the school and name of the programme. Students are registered in various schools
manually based on the admission slips. Students are assigned papers (compulsory, elective and
practical) depending upon the scheme of the selected programme. Every school is responsible
for its registration process and the following are prepared and maintained manually:
(i) List of students registered in a programme.
(ii) List of students registered for a particular paper.
(iii) List of papers offered in a particular semester.
(iv) List of faculty in a school.
(v) Personal details of the students.
(vi) Registration card for every registered student.
The university decides to automate the manual registration process in order to improve the
existing system. The proposed system should perform the following functions:
Issue of login Id and password to the members i.e. students and faculty.
Maintain the personal details of the students.
Maintain the details of the faculty.
Maintain the details of the various papers theory (compulsory and elective) and
practical as per the scheme of the programme.
Maintain the semester-wise details of each student.
Issue of registration card to each student every semester.
List of registered students:
Roll number wise
Programme wise
Semester wise
Paper wise
518
Appendix I
CONTENTS
1. Introdtuction
1.1 Purpose
1.2 Scope
1.3 Definitions, Acronyms and Abbreviations
1.4 References
1.5 Overview
2. Overall Description
2.1 Product Perspective
2.1.1 System Interfaces
2.1.2 User Interfaces
2.1.3 Hardware Interfaces
2.1.4 Software Interfaces
2.1.5 Communication Interfaces
2.1.6 Memory Constraints
2.1.7 Operations
2.1.8 Site Adaptation Requirements
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependencies
2.6 Apportioning of Requirements
3. Specific Requirements
3.1 External Interface Requirements
3.1.1 User Interfaces
3.1.2 Hardware Interfaces
3.1.3 Software Interfaces
3.1.4 Communication Interfaces
3.2 Functional Requirements
3.2.1 Login
3.2.2 Maintain School Details
3.2.3 Maintain Programme Details
3.2.4 Maintain Scheme Details
3.2.5 Maintain Paper Details
3.2.6 Maintain Student Details
3.2.7 Maintain Faculty Details
3.2.8 Maintain Student Registration Form
3.2.9 Generate Reports
3.2.10 Generate Registration Card
Appendix I 519
3.3
3.4
3.5
3.6
3.7
Performance Requirements
Design Constraints
Software System Attributes
Logical Database Requirements
Other Requirements
Introduction
A university is organized in different teaching schools and each school conducts a variety of
programmes. Admissions to the various programmes offered by each school are done through
counselling. Admission slips are issued to the admitted students giving their roll numbers,
name of the school and the name of the programme.
After admission, every student has to register in the University Registration System (URS)
which is open for a specific period at the beginning of the academic session. Every student has
to obtain a login Id and password from the System Administrator. After successfully logging
on to the system, a student needs to enter his/her personal details in the system. The student
also needs to select elective papers of his/her choice as per the programme scheme. Compulsory
papers (theory and practical) offered in that semester are then assigned automatically. On
submitting the requisite details, a registration card giving the personal information and list of
the papers to be studied during the semester is issued to the student.
Faculty members can also access the URS by obtaining the login Id and password from the
system administrator. They can view the details of the students who have been registered for
various programmes in a school.
1.1 Purpose
The URS maintains information about various papers to be studied by a student in a particular
programme. A paper may either be a theory paper or a practical paper. Theory papers may be
of two types: compulsory paper or elective paper. Compulsory papers are assigned automatically
whereas a student has to select the elective papers of his/her choice in a particular semester.
1.2 Scope
The proposed University Registration System shall perform the following functions:
Issue of login Id and password to the members i.e. students and faculty.
Maintain the personal details of the students.
Maintain the details of the faculty.
Maintain details of the various papers theory (compulsory and elective) and practical
as per the scheme of the programme.
520
Appendix I
1.4 References
(a)
(b)
(c)
(d)
A Practitioners Guide to Software Test Design by Lee Copeland, Artech House, 2004.
Software Engineering by K.K. Aggarwal & Yogesh Singh, New Age Publishing
House, 3rd Edition, 2008.
IEEE Recommended Practice for Software Requirements Specifications IEEE Std
830-1998.
IEEE Standard for Software Test Documentation IEEE Std. 829-1998.
1.5 Overview
The rest of the SRS document describes various system requirements, interfaces, features and
functionalities in detail.
Appendix I 521
2.
Overall Description
The URS registers a student for a semester to a programme offered by a school of a university.
It is assumed that the student has already been admitted in the university for a specific
programme. The system administrator will receive lists of the admitted students (school-wise
and programme-wise) from the academic section responsible for counselling. The establishment
section will provide the list of the faculty members appointed in the school. Based on this
information, the system administrator/Data Entry Operator (DEO) will generate the login Id
and password for the faculty and the students.
The user can access URS on the universitys LAN. Students are permitted to add, modify
and view their information only after they have successfully logged on to the system. After
registration, students can print their registration card. Faculty members can make the query
about the registered students and view/print the information of the registered students, papers
offered in the various programmes, etc. The system administrator is the master user of the URS
and will maintain the records of the school, programme, scheme, paper, students and faculty,
and generate their login Id and password. The system administrator will also be able to
generate the registration card and various reports from the URS. The DEO will be able to
maintain the records of students and faculty.
The administrator will have to maintain the following information:
Login details
School details
Programme details
Scheme details
Paper details
The administrator/DEO will have to maintain the following information:
Student details
Faculty details
The student will be able to add/edit/view the following information:
Student registration details
The administrator/student requires following reports from the proposed system:
Registration card
The administrator/faculty will be able to generate the following reports from the system:
List of registered students
Roll number wise
Programme wise
Semester wise
Paper wise
List of programmes offered by the university.
List of papers offered in a particular semester of a particular programme.
List of faculty in a school.
522
Appendix I
Login: to allow the entry of only authorized users through valid login Id and password.
School Details: to maintain school details.
Programme Details: to maintain programme details.
Scheme Details: to maintain scheme details of a programme.
Paper Details: to maintain paper details of a scheme for a particular programme.
Student Details: to maintain students details that will include personal information.
Faculty Details: to maintain the faculty members details.
Student Registration Details: to maintain details about papers to be studied by a
student in the current semester.
The software should generate the following viewable and printable reports:
(a)
(b)
(c)
(d)
(e)
Registration Card: It will contain the roll number, name of the student, school, programme,
semester and the papers in which the student is registered. The registration card will be
generated after filling the necessary information in the student registration form.
List of Students: It will be generated roll number wise, programme wise, semester
wise and paper wise.
List of Programmes: It will give the details of programmes offered by various schools
of the university.
List of Papers: It will give the list of papers offered in a particular semester for a
particular programme.
List of Faculty: It will give the list of faculty in a school.
Appendix I 523
2.1.7 Operations
None
524
Appendix I
2.4 Constraints
There will be only one administrator.
The delete operation is available to the administrator and DEO (can only delete student
and faculty records). To reduce the complexity of the system, there is no check on the
delete operation. Hence, the administrator/DEO should be very careful before deletion
of any record and he/she will be responsible for data consistency.
The user will not be allowed to update the primary key.
Specific Requirements
This section contains the software requirements in detail along with the various forms to be
developed.
Appendix I 525
Login Form
This will be the first form, which will be displayed. It will allow the user to access
different forms based on his/her role.
Change Password
The change password form facilitates the user to change the password. Various fields
available on this form will be:
Login Id: Numeric of 11 digits in length and only digits from 0 to 9 are allowed.
Alphabets, special characters and blank spaces are not allowed.
Old Password: Alphanumeric in the range of 4 to 15 characters in length. Blank
spaces are not allowed. However, special characters are allowed.
New Password: Alphanumeric in the range of 4 to 15 characters in length. Blank
spaces are not allowed. However, special characters are allowed.
Confirm Password: Alphanumeric in the range of 4 to 15 characters in length.
Blank spaces are not allowed. However, special characters are allowed. The
contents of this field must match with the contents of the new password field.
526
Appendix I
Appendix I 527
Scheme Details
This form will be accessible only to the system administrator. It will allow him/her
to add/edit/delete/view information about new/existing scheme(s) for the school and
programmes that were selected in the Scheme Details form. The list of schools and
programmes available in that particular school will be displayed.
528
Appendix I
Appendix I 529
530
Appendix I
(x)
Appendix I 531
Course: _________
S. No.
Course Code
Course Title
1.
2.
3.
4.
(xi) Generate Reports
The reports will be accessible to the system administrator and faculty. The system will
generate different reports according to the specified criteria.
(i)
(ii)
School
Programme
Roll No.
Name
Semester
532
Appendix I
S. No.
1.
2.
3.
(iv) List of students paper wise:
S. No.
1.
2.
3.
(v)
S. No.
1.
2.
3.
(vi) List of papers in a particular semester:
S. No.
1.
2.
3.
Paper type
Appendix I 533
Faculty name
Designation
1.
2.
3.
3.2.1 Login
A. Validity Checks
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
(viii)
(ix)
(x)
(xi)
534
Appendix I
3.2.2
School Details
A. Validity Checks
(i) Only the administrator will be authorized to access the Maintain School Details module.
(ii) Every school will have a unique school name.
(iii) The school code cannot be blank.
(iv) The school code cannot contain alphanumeric, special and blank characters.
(v) The school code will have only 3 digits.
(vi) The school name cannot be blank.
(vii) The school name will only accept alphabetic characters and blank spaces.
(viii) The school name cannot accept special characters and numeric digits.
(ix) The school name can have between 10 and 50 characters.
B. Sequencing information
None
D. Error Handling/Response to Abnormal Situations
If any of the validations flow does not hold true, an appropriate error message will be prompted
to the user for doing the needful.
3.2.3
Programme Details
A. Validity Checks
(i)
Only the administrator will be authorized to access the Maintain Programme Details
module.
(ii) Every programme will have a unique programme code and name.
(iii) The school name cannot be blank.
(iv) The programme name cannot be blank.
(v) The programme name can be of 3 to 50 characters in length.
(vi) The programme name can only have alphabets and brackets.
(vii) The programme name cannot have special characters, digits and blank spaces.
(viii) The duration cannot be blank.
(ix) The duration can have a value from 1 to 7.
(x) The number of semesters cannot be blank.
Appendix I 535
Only the administrator will be authorized to access the Maintain Scheme Details
module.
(ii) Every scheme will have a unique semester.
(iii) The school name cannot be blank.
(iv) The programme name cannot be blank.
(v) The number of theory papers cannot be blank.
(vi) The number of theory papers can have a value between 0 and 10.
(vii) The number of elective papers cannot be blank.
(viii) The number of elective papers can have a value between 0 and 10.
(ix) The number of practical papers cannot be blank.
(x) The number of practical papers can have a value between 0 and 10.
(xi) The semester cannot be blank.
(xii) The semester can have a value only between 1 and 14.
(xiii) The total credit cannot be blank.
(xiv) The total credit can have a value between 5 and 99.
B. Sequencing information
The school and programme details will have to be entered into the system before any scheme
details can be entered into the system.
C. Error Handling/Response to Abnormal Situations
If any of the validations/sequencing flow does not hold true, an appropriate error message will
be prompted to the administrator for doing the needful.
3.2.5
Paper Details
A. Validity Checks
(i)
(ii)
Only the administrator will be authorized to access the Maintain Paper Details module.
A scheme will have more than one paper.
536
Appendix I
(iii)
No two semesters will have the same paper i.e. a paper will be offered only in a particular
semester for a given programme.
(iv) The school name cannot be blank.
(v) The programme name cannot be blank.
(vi) The semester cannot be blank.
(vii) The semester can have a value only between 1 and 14.
(viii) The paper code cannot be blank.
(ix) The paper code cannot accept special characters.
(x) The paper code can have both alphabetic and numeric characters.
(xi) The paper code can include blank spaces.
(xii) The paper code can be of 5 to 7 characters in length.
(xiii) The paper name cannot be blank.
(xiv) The paper name can have alphanumeric (alphabets and digits) characters or blank spaces.
(xv) The paper name cannot have special characters.
(xvi) The paper type may be compulsory, elective or practical.
(xvii) The credit cannot be blank.
(xviii) The credit can have a value only between 1 and 30.
B. Sequencing information
School, programme and scheme details will have to be entered into the system before any
paper details can be entered into the system.
C. Error Handling/Response to Abnormal Situations
If any of the validations/sequencing flow does not hold true, an appropriate error message will
be prompted to the user for doing the needful.
3.2.6
Student Details
A. Validity Checks
(i)
Only the administrator/DEO will be authorized to access the Maintain Student Details
module.
(ii) Every student will have a unique roll number.
(iii) The school name cannot be blank.
(iv) The programme name cannot be blank.
(v) The roll number cannot be blank.
(vi) The length of the roll number for any user can only be equal to 11 digits.
(vii) The roll number cannot contain alphabets, special characters and blank spaces.
(viii) The student name cannot be blank.
(ix) The length of the student name can be between 3 and 50 characters.
(x) The student name will only accept alphabetic characters and blank spaces.
(xi) The year of admission cannot be blank.
(xii) The year of admission can have only 4 digits.
(xiii) The password cannot be blank (initially auto generated of 8 digits).
(xiv) The password can be of 4 to 15 characters in length.
(xv) Alphabets, digits and hyphens and underscore characters are allowed in the password
field. However, blank spaces are not allowed.
(xvi) The roll number and login Id are the same.
Appendix I 537
B. Sequencing information
School and programme details will have to be entered into the system before any student
details can be entered into the system.
C. Error Handling/Response to Abnormal Situations
If any of the validations/sequencing flow does not hold true, an appropriate error message will
be prompted to the user for doing the needful.
3.2.7
Faculty Details
A. Validity Checks
(i)
Only the administrator/DEO will be authorized to access the Maintain Faculty Details
module.
(ii) Every faculty member will have a unique employee Id.
(iii) The school name cannot be blank.
(iv) The employee Id cannot be blank.
(v) The length of the employee Id will be equal to 11 digits only.
(vi) The employee Id cannot contain alphabets, special characters and blank spaces.
(vii) The name of the employee cannot be blank.
(viii) The employee name will only accept alphabetic characters and blank spaces. Special
characters are not allowed.
(ix) The designation cannot be blank.
(x) The password cannot be blank (initially auto generated of 8 digits).
(xi) The password can be of 4 to 15 characters in length.
(xii) Alphabets, digits, hyphens and underscore characters are allowed in the password
field. However blank spaces are not allowed.
B. Sequencing information
The school details should be available in the system.
C. Error Handling/Response to Abnormal Situations
If any of the validations/sequencing flow does not hold true, an appropriate error message will
be prompted to the administrator for doing the needful.
3.2.8
Registration Form
A. Validity Checks
(i)
(ii)
(iii)
(iv)
(v)
(vi)
538
Appendix I
3.2.9
Generate Report
A. Validity Checks
(i)
B. Sequencing information
Reports can be generated only after the school, programme, scheme, paper and student
registration details have been entered into the system.
C. Error Handling/Response to Abnormal Situations
If any of the validations/sequencing flow does not hold true, an appropriate error message will
be prompted to the user for doing the needful.
Appendix I 539
B. Sequencing information
The registration card can be generated only after the school, programme, scheme, paper and student
registration details have been entered into the system for that student for the given semester.
Reliability
The applications will be available to the students throughout the registration period and have
a high degree of fault tolerance.
Security
The application will be password protected. Users will have to enter the correct login Id and
password to access the application.
Maintainability
The application will be designed in a maintainable manner. It will be easy to incorporate new
requirements in the individual modules.
Portability
The application will be easily portable on any windows-based system that has SQL Server
installed.
540
Appendix I
Description
Login
School
Programme
Scheme
Paper
Student
Faculty
StudentPaperList
RegistrationOpen
If the registration closes, he/she cannot register. The student may not be
permitted to register more than once in a semester.
Appendix II
2. Actors
Administrator
3. Pre-Conditions
The administrator must be logged onto the system. The School and Programme details for
which the scheme is to be added/updated/deleted/viewed must be available before this use
case begins.
4. Post-Conditions
If the use case is successful, the scheme information is added/updated/deleted/viewed from
the system. Otherwise, the system state is unchanged.
5. Basic Flow
This use case starts when administrator wishes to add/modify/edit/delete/view scheme
information:
(i)
(ii)
Appendix_II.indd 541
The system requests that the administrator specify the function he/she would like to
perform (either Add a scheme, Edit a scheme, Delete a scheme or View a scheme)
Once the administrator provides the requested information, one of the flows is
executed.
8/25/2011 2:47:16 PM
542
Appendix II
(iii) If the administrator selects Add a Scheme, the Add a Scheme flow is executed.
(iv) If the administrator selects Edit a Scheme, the Edit a Scheme flow is executed.
(v) If the administrator selects Delete a Scheme, the Delete a Scheme flow is
executed.
(vi) If the administrator selects View a Scheme, the View a Scheme flow is executed.
Basic Flow 1: Add a Scheme
The system requests that the administrator enters the scheme information. This includes:
(i) The system requests the administrator to select school and programme and also enter
the following information:
1. Semester
2. Number of theory courses
3. Number of elective courses
4. Number of practical courses
5. Total credits
(ii) Once the administrator provides the requested information, scheme is added to the
system.
Basic Flow 2: Edit a Scheme
(i)
The system requests the administrator to select the school name, programme name
and semester.
(ii) The administrator selects the school and programme and also enters the semester.
The system retrieves and displays the scheme information.
(iii) The administrator makes the desired changes to the scheme information. This includes
any of the information specified in the Add a Scheme flow.
(iv) The system prompts the administrator to confirm the updation of the scheme.
(v) After confirming the changes, the system updates the scheme record with the updated
information.
Basic Flow 3: Delete a Scheme
(i)
The system requests that the administrator specify the school name, programme
name and semester.
(ii) The administrator selects the school and programme and also enters the semester.
The system retrieves and displays the scheme information.
(iii) The system prompts the administrator to confirm the deletion of the scheme.
(iv) The administrator confirms the deletion.
(v) The system deletes the scheme record.
Basic Flow 4: View a Scheme
(i)
(ii)
Appendix_II.indd 542
The system requests that the administrator specify the school name, programme
name and semester.
The system retrieves and displays the scheme information.
8/25/2011 2:47:16 PM
Appendix II 543
6. Alternative Flows
Alternative Flow 1: Invalid Entry
If in the Add a Scheme or Edit a Scheme flows, the actor enters invalid semester/Number
of theory papers/number of elective papers/number of practical papers/total credits or leaves
the invalid semester/Number of theory papers/number of elective papers/number of practical
papers/total credits empty, the system displays an appropriate error message. The actor
returns to the basic flow and may reenter the invalid entry.
Alternative Flow 2: Scheme already exist
If in the Add a Scheme flow, a scheme with a specified semester already exists, the system
displays an error message. The administrator returns to the basic flow and may reenter the
scheme.
Alternative Flow 3: Scheme not found
If in the Edit a Scheme or Delete a Scheme or View a Scheme flows, the scheme
information with the specified school name, programme name and semester does not exist,
the system displays an error message. The administrator returns to the basic flow and may
reenter the specified school name, programme name or semester.
Alternative Flow 4: Edit cancelled
If in the Edit a Scheme flow, the administrator decides not to edit the scheme, the edit is
cancelled and the Basic Flow is re-started at the beginning.
Alternative Flow 5: Delete cancelled
If in the Delete a Scheme flow, the administrator decides not to delete the scheme, the
delete is cancelled and the Basic Flow is re-started at the beginning.
Alternative Flow 6: Deletion not allowed
If in the Delete a Scheme flow, paper details of the semester selected exists then the system
displays an error message. The administrator returns to the basic flow and may reenter the
specified school, programme or semester.
Alternative Flow 7: User exits
This allows the user to exit at any time during the use case. The use case ends.
7. Special Requirements
None.
Appendix_II.indd 543
8/25/2011 2:47:16 PM
544
Appendix II
Basic Flow 1
Basic Flow 1 Alternate Flow 1
Basic Flow 1 Alternate Flow 2
Basic Flow 1 Alternate Flow 7
Basic Flow 2
Basic Flow 2 Alternate Flow 1
Basic Flow 2 Alternate Flow 3
Basic Flow 2 Alternate Flow 4
Basic Flow 2 Alternate Flow 7
Basic Flow 3
Basic Flow 3 Alternate Flow 3
Basic Flow 3 Alternate Flow 5
Basic Flow 3 Alternate Flow 6
allowed
Basic Flow 3 Alternate Flow 7
Scenario 15- View a scheme
Basic Flow 4
Basic Flow 4 Alternate Flow 3
Basic Flow 4 Alternate Flow 7
For maintain a scheme use case, we identify nine input variables for various basic flows in
the use case. There are five input variables namely semester, number of theory papers, No. of
elective papers, No. of practical papers, Total credits) and four selection variables (School
selected, Programme Selected, edit confirmed, delete confirmed) in this use case. These
inputs will be available for the respective flows as specified in the use case.
Appendix_II.indd 544
8/25/2011 2:47:16 PM
Appendix_II.indd 545
Input 2
Input 3
Input 4 Input 5
Input 6
do
do
do
do
do
do
TC3
TC4
TC5
TC6
TC7
TC8
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
No
Yes/no
No
Scenario 2Add a scheme
alternative
TC2
Yes
Yes
Scenario 1Yes
Add a scheme
Valid/
invalid
Valid/
invalid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Input 7 Edit
School Prog.
Semester No. of No. of
No. of
Total
selected selected
theory elective practical credits
papers papers papers
Input 1
TC1
Test Scenario
case Name and
Id
description
Table II-2. Test case matrix for the maintain scheme details use case
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Delete
school
--
credits
practical
papers
papers
papers
Semester
(Contd.)
range
Total credits are
range
range
range
Semester is not in
Programme
not selected programme
School not
selected
Scheme is
Expected
output
Appendix II 545
8/25/2011 2:47:16 PM
Appendix_II.indd 546
Input 3
Yes
Yes
Yes
Yes
TC13 do
TC14 do
TC15 do
TC16 do
alternative
Yes
Yes
Yes
Yes
Yes
Valid
Valid
Valid
Valid/
invalid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
n/a
n/a
n/a
n/a
n/a
Yes
n/a
n/a
n/a
n/a
n/a
n/a
credits
practical
papers
papers
papers
Semester
Yes
Valid
n/a
TC12 Scenario 6-
Valid
n/a
Scheme is
Valid
Valid /
Yes
Valid /
Scheme
Expected
output
Yes
Valid /
n/a
Delete
TC11 Scenario 5-
Valid /
Input 7 Edit
No. of
Total
practical credits
papers
Valid
Valid
n/a
Input 6
allowed to
Valid /
No. of
elective
papers
Valid
Input 4 Input 5
Yes/no
Input 2
School Prog.
Semester No. of
selected selected
theory
papers
Yes
Yes
Valid
Input 1
TC9
Test Scenario
case Name and
Id
description
(Contd.)
range
Total credits are
(Contd.)
range
range
range
Semester is not in
--
--
546
Appendix II
8/25/2011 2:47:16 PM
Appendix_II.indd 547
Input 2
Yes/no
Yes/no
TC19 Scenario 9-
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
TC21 Scenario
11- Delete
a scheme
alternative
TC22 Scenario
12- Delete
a scheme
alternative
cancelled
Yes
Yes
TC20 Scenario
10- Delete a
scheme
alternative
Input 3
Input 4 Input 5
Input 6
Valid /
n/a
n/a
n/a
Valid /
Valid
n/a
n/a
n/a
n/a
Valid /
Valid
n/a
n/a
n/a
n/a
Valid /
Valid
n/a
n/a
n/a
n/a
Valid /
Valid
n/a
n/a
n/a
n/a
n/a
No
n/a
Input 7 Edit
School Prog.
Semester No. of No. of
No. of
Total
selected selected
theory elective practical credits
papers papers papers
Input 1
TC18 Scenario 8-
alternative
TC17 Scenario 7-
Test Scenario
case Name and
Id
description
(Contd.)
No
n/a
Yes
n/a
n/a
n/a
Delete
--
--
operation
details
appears
(Contd.)
delete operation
programme and
semester does not
Scheme is
allowed to
details
appears
programme and
semester does not
Expected
output
Appendix II 547
8/25/2011 2:47:16 PM
Appendix_II.indd 548
Input 2
Input 3
Input 4 Input 5
Input 6
Yes
Yes
Yes/no
Yes
TC25 Scenario
15- View a
scheme
Yes
TC26 Scenario
16- View a
scheme alternative
Scheme not
Yes/no
TC27 Scenario
17-View a
scheme alternative
Yes
Yes/no
Yes
Valid /
Valid /
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Input 7 Edit
School Prog.
Semester No. of No. of
No. of
Total
selected selected
theory elective practical credits
papers papers papers
Input 1
Yes/no
TC24 Scenario
14- Delete a
scheme alternative
not allowed
TC23 Scenario
13- Delete
a scheme
alternative
Test Scenario
case Name and
Id
description
(Contd.)
n/a
n/a
n/a
n/a
n/a
Delete
allowed to
Scheme not
Scheme
allowed to
Deletion not
allowed
Expected
output
--
--
--
548
Appendix II
8/25/2011 2:47:16 PM
Appendix_II.indd 549
Scenario
School
Name and
selected
description
Scenario 1Add a scheme
do
do
do
do
do
do
Test
case
Id
TC1
TC2
TC3
TC4
TC5
TC6
TC7
TC8
MCA
MCA
MCA
MCA
MCA
12
Prog.
Semester No. of
selected
theory
papers
MCA
1
5
11
No. of
elective
papers
0
12
n/a
n/a
n/a
n/a
n/a
n/a
n/a
No. of
Total
Edit
practical credits
papers
2
22
n/a
Table II-3. Test case matrix with actual data values for the maintain scheme details use case
n/a
n/a
n/a
n/a
n/a
n/a
credits
papers
papers
Programme
School not
selected
5-99
(Contd.)
programme name
school
Scheme is added --
n/a
n/a
Delete
Appendix II 549
8/25/2011 2:47:16 PM
Appendix_II.indd 550
TC14 do
TC15 do
18
TC13 do
alternative
TC12 Scenario 6-
MCA
TC11 Scenario 5-
MCA
11
12
12
26
22
n/a
n/a
n/a
n/a
Yes
n/a
n/a
Prog.
Semester No. of No. of
No. of
Total
Edit
selected
theory elective practical credits
papers papers papers
School
selected
TC10 Scenario 4*
Add a scheme
alternative
TC9
Test Scenario
case Name and
Id
description
(Contd.)
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Delete
papers
papers
Scheme is
--
--
(Contd.)
550
Appendix II
8/25/2011 2:47:16 PM
Appendix_II.indd 551
do
Scenario
Name and
description
TC21 Scenario
11- Delete
a scheme
alternative
TC22 Scenario
12- Delete
a scheme
alternative-
cancelled
TC20 Scenario
10- Delete a
scheme
alternative
TC19 Scenario 9-
n/a
n/a
n/a
n/a
Prog.
Semester No. of
selected
theory
papers
2
5
School
selected
TC18 Scenario 8-
alternative
TC17 Scenario 7-
Test
case
Id
TC16
(Contd.)
n/a
n/a
n/a
n/a
No. of
elective
papers
0
n/a
n/a
n/a
n/a
n/a
n/a
n/a
22
n/a
n/a
n/a
n/a
n/a
No
n/a
No. of
Total
Edit
practical credits
papers
2
187
n/a
No
n/a
Yes
n/a
n/a
n/a
n/a
Delete
scheme details
appears
Scheme not
Scheme is
scheme details
appears
Scheme not
credits
operation
--
operation
(Contd.)
Appendix II 551
8/25/2011 2:47:16 PM
Appendix_II.indd 552
Scenario
Name and
description
Scenario
13- Delete
a scheme
alternative
School
selected
TC27 Scenario
*
17-View a
scheme alter-
Scheme not
*
TC26 Scenario
16- View a
scheme alter-
n/a
n/a
n/a
n/a
Prog.
Semester No. of
selected
theory
papers
2
n/a
TC25 Scenario
15- View a
scheme
not allowed
TC24 Scenario
*
14- Delete a
scheme alter-
Test
case
Id
TC23
(Contd.)
n/a
n/a
n/a
n/a
No. of
elective
papers
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
No. of
Total
Edit
practical credits
papers
n/a
n/a
n/a
n/a
n/a
n/a
Scheme not
Scheme dis-
--
Deletion not
allowed
n/a
n/a
Delete
552
Appendix II
8/25/2011 2:47:16 PM
Appendix II 553
Appendix_II.indd 553
The system requests that the administrator to enter the paper code.
The administrator enters the name of the paper code. The system retrieves and
displays the paper information.
8/25/2011 2:47:16 PM
554
Appendix II
(iii) The administrator makes the desired changes to the paper information. This includes
any of the information specified in the Add a Paper flow.
(iv) The system prompts the administrator to confirm the updation of the paper.
(v) After confirming the changes, the system updates the paper record with the updated
information.
Basic Flow 3: Delete a Paper
(i)
(ii)
The system requests that the administrator specify the paper code.
The administrator enters the paper code. The system retrieves and displays the paper
information.
(iii) The system prompts the administrator to confirm the deletion of the paper.
(iv) The administrator confirms the deletion.
(v) The system deletes the specified paper record.
Basic Flow 4: View a Paper
(i)
The system requests that the administrator specify the paper code.
The system retrieves and displays the paper information.
6. Alternative Flows
Alternative Flow 1: Invalid Entry
If in the Add a Paper or Edit a Paper flows, the actor enters invalid paper code/paper name/
paper type/credits/semester or leaves the paper code/paper name/paper type/credits/semester
empty, the system displays an appropriate error message. The actor returns to the basic flow.
Alternative Flow 2: Paper already exist
If in the Add a Paper flow, a paper code in a specified semester already exists, the system
displays an error message. The administrator returns to the basic flow.
Alternative Flow 3: Paper not found
If in the Edit a Paper or Delete a Paper or View a Paper flows, a paper with the specified
scheme does not exist, the system displays an error message. The administrator returns to
the basic flow.
Alternative Flow 4: Edit cancelled
If in the Edit a Paper flow, the administrator decides not to edit the paper, the edit is
cancelled and the Basic Flow is re-started at the beginning.
Alternative Flow 5: Delete cancelled
If in the Delete a Paper flow, the administrator decides not to delete the paper, the delete is
cancelled and the Basic Flow is re-started at the beginning.
Alternative Flow 6: Deletion not allowed
If in the Delete a Paper flow, student registration details of the paper code in a specified
semester already exists then the system displays an error message. The administrator returns
to the basic flow.
Alternative Flow 7: User exits
This allows the user to quit during the use case. The use case ends.
Appendix_II.indd 554
8/25/2011 2:47:16 PM
Appendix II 555
7. Special Requirements
None.
8. Associated use cases
Login, Maintain School Details, Maintain Programme Details, Maintain Scheme Details,
Maintain Student Details, Maintain Student Registration Details.
Basic Flow 1
Basic Flow 1
Alternate Flow 1
Basic Flow 1
Alternate Flow 2
Basic Flow 1
Alternate Flow 7
Basic Flow 2
Basic Flow 2
Alternate Flow 1
Basic Flow 2
Alternate Flow 3
Basic Flow 2
Alternate Flow 4
Basic Flow 2
Alternate Flow 7
Basic Flow 3
Basic Flow 3
Alternate Flow 3
Basic Flow 3
Alternate Flow 5
Basic Flow 3
Alternate Flow 6
Basic Flow 3
Alternate Flow 7
celled
allowed
Basic Flow 4
Basic Flow 4
Alternate Flow 3
Basic Flow 4
Alternate Flow 7
For maintain a paper use case, we identify nine input variables for various basic flows in the
use case.
Appendix_II.indd 555
8/25/2011 2:47:16 PM
Appendix_II.indd 556
Input 2
No
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
do
do
do
do
do
do
TC3
TC5
TC6
TC7
TC8
TC9
Yes
Yes
Yes
Yes
Yes/no
Scenario 2- No
Add a paper
alternative
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Paper
name
Input 6 Input 7
Edit
Yes
Yes
Yes
No
Yes/no
Yes/no
Yes/no
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Yes
No
Yes/no
Yes/no
Yes/no
Yes/no
Yes/no
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Paper
Credits Semester
type
selected
selected
Yes/no Valid/ Yes/no
n/a
invalid
School Prog.
Paper
selected selected code
Input 1
TC2
Test Scenario
case Name and
Id
description
Table II-5. Test case matrix for the maintain paper details use case
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
select a school
Remarks (if
any)
Semester
not
selected
Paper
credits
paper
name
(Contd.)
select a semester
Paper with the
not selected
Paper name is
not in the spec-
Programme
not
select proselected
gramme
Paper code is
paper code not in the spec-
School not
selected
Expected
output
556
Appendix II
8/25/2011 2:47:17 PM
Appendix_II.indd 557
Input 2
n/a
n/a
TC13 Scenario 6-
celled
TC18 Scenario
n/a
n/a
TC17 Scenario 7-
n/a
n/a
TC16 do
alternative
n/a
n/a
TC15 do
n/a
n/a
n/a
TC14 do
n/a
n/a
n/a
TC12 Scenario 5-
alternative
Yes/no
Valid
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid
n/a
Valid
Valid/
invalid
Valid
Valid/
invalid
Paper
name
Input 6 Input 7
Edit
Yes
n/a
Yes
Yes/no
Valid
n/a
Valid/
invalid
Valid/
invalid
Valid/
invalid
Yes/no
Yes/no
Valid
Yes
Yes
n/a
Yes/No
Yes/no
No
n/a
n/a
n/a
n/a
n/a
Yes/no
Yes/no
Yes
Yes
Paper
Credits Semester
type
selected
selected
n/a
Yes/no Valid/ Yes/no
invalid
School Prog.
Paper
selected selected code
Input 1
Test Scenario
case Name and
Id
description
(Contd.)
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
--
the maintain
paper details
Remarks (if
any)
details
appears
Paper not
credits
paper
name
operation
(Contd.)
not selected
Paper name is
not in the spec-
Paper code is
paper code not in the spec-
Paper is
allowed to
Expected
output
Appendix II 557
8/25/2011 2:47:17 PM
Appendix_II.indd 558
Deletion not
allowed
n/a
n/a
TC22 Scenario
n/a
12- Delete a
paper alter-
n/a
n/a
n/a
TC21 Scenario
11- Delete
a paper
alternative
Deletion
cancelled
TC23 Scenario
13- Delete a
paper alter-
n/a
Valid
Valid
Valid
Valid
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
TC20 Scenario
n/a
10- Delete a
paper
alternative
TC19 Scenario 9-
Test Scenario
case Name and
Id
description
(Contd.)
n/a
n/a
n/a
n/a
Valid/
invalid
n/a
n/a
n/a
n/a
Yes/no
n/a
n/a
n/a
n/a
n/a
Input 6 Input 7
Edit
Credits Semester
selected
n/a
No
n/a
Yes
n/a
Deletion
Deletion
not allowed
details
appears
Paper not
Paper is
deleted
-
allowed to
Expected
output
(Contd.)
paper code
delete operation
--
the maintain
paper details
Remarks (if
any)
558
Appendix II
8/25/2011 2:47:17 PM
Appendix_II.indd 559
Input 2
n/a
n/a
n/a
n/a
n/a
n/a
n/a
TC25 Scenario
15- View a
paper
TC26 Scenario
16- View
a paper
alternative
TC27 Scenario
17-View a
paper alternative
Valid/
invalid
Valid
Valid
Valid/
invalid
n/a
n/a
n/a
n/a
Paper
name
Input 6 Input 7
Edit
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Paper
Credits Semester
type
selected
selected
n/a
n/a
n/a
n/a
School Prog.
Paper
selected selected code
Input 1
n/a
TC24 Scenario
14- Delete a
paper alternative
Test Scenario
case Name and
description
Id
(Contd.)
n/a
n/a
n/a
n/a
Deletion
allowed to
Paper not
Paper
allowed to
Expected
output
the maintain
paper details
--
the maintain
paper details
Remarks (if
any)
Appendix II 559
8/25/2011 2:47:17 PM
Appendix_II.indd 560
TC1
TC2
do
do
do
do
do
TC3
TC5
TC6
TC7
TC8
alternative
Scenario
Name and
description
Test
case
Id
School
selected
BA607
BA607
MCA
MCA
BA607
BA607
MCA
MCA
MCA
Prog.
Paper
selected code
Discrete
mathematics
Discrete
mathematics
Di
Discrete
mathematics
Paper name
Paper type
selected
32
Credits Semester
Table II-6. Test case matrix with actual data values for the maintain paper details use case
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
selected
programme
name
Paper code
is not in the
selected
school name
--
Remarks (if
any)
credits
(Contd.)
mat/range
Credits are
not in the
not selected
Paper name
paper name is not in the
paper code
Programme
School
Paper is
Expected
output
560
Appendix II
8/25/2011 2:47:17 PM
Appendix_II.indd 561
MCA
n/a
TC12 Scenario 5-
TC13 Scenario 6-
n/a
n/a
n/a
TC14 do
TC15 do
TC16 do
alternative
MCA
n/a
n/a
n/a
n/a
MCA
do
B@A
BA607
BA607
BA607
Prog.
Paper
selected code
TC9
School
selected
Scenario
Name and
description
Test
case
Id
(Contd.)
Discrete
mathematics
Discrete
mathematics
Discrete
mathematics
Paper name
Paper type
selected
32
10
Credits Semester
n/a
n/a
n/a
n/a
Yes
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
Paper code
is not in the
--
the maintain
paper details
in the data-
paper code
programme
Paper with
Semester
does not
Remarks (if
any)
credits
(Contd.)
mat/range
Credits are
not in the
not selected
Paper name
paper name is not in the
paper code
Paper is
allowed to
Paper
semester
Expected
output
Appendix II 561
8/25/2011 2:47:17 PM
Appendix_II.indd 562
Scenario 7-
TC17
n/a
TC22 Scenario
n/a
12- Delete a
paper alter-
Deletion
cancelled
n/a
n/a
TC21 Scenario
11- Delete
a paper
alternative
n/a
n/a
n/a
Prog.
Paper
selected code
n/a
n/a
n/a
n/a
School
selected
TC20 Scenario
n/a
10- Delete a
paper
alternative
TC19 Scenario 9-
cancelled
TC18 Scenario
alternative
Scenario
Name and
description
Test
case
Id
(Contd.)
n/a
n/a
Paper name
n/a
n/a
Paper type
selected
n/a
n/a
n/a
n/a
Credits Semester
n/a
n/a
n/a
n/a
No
n/a
Edit
No
n/a
Yes
n/a
n/a
n/a
Deletion
details
appears
Paper not
Paper is
allowed to
details
appears
Paper not
Expected
output
(Contd.)
the delete
operation
in the data-
paper code
Paper with
--
the maintain
paper details
the edit
operation
in the data-
paper code
Paper with
Remarks (if
any)
562
Appendix II
8/25/2011 2:47:17 PM
Appendix_II.indd 563
Scenario
Name and
description
School
selected
n/a
TC27
n/a
n/a
Scenario
17-View a
paper alternative
n/a
n/a
n/a
Prog.
Paper
selected code
n/a
TC25 Scenario
15- View a
paper
TC24
Deletion not
allowed
Scenario
n/a
14- Delete a
paper alternative
TC23 Scenario
n/a
13- Delete a
paper alter-
Test
case
Id
(Contd.)
n/a
n/a
n/a
n/a
n/a
Paper name
n/a
n/a
n/a
n/a
n/a
Paper type
selected
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Credits Semester
n/a
n/a
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
n/a
Deletion
allowed to
Paper not
Paper
allowed to
Deletion not
allowed
Expected
output
cancels the
maintain
paper details
in the data-
paper code
Paper with
--
the maintain
paper details
paper code
Remarks (if
any)
Appendix II 563
8/25/2011 2:47:17 PM
564
Appendix II
(ii)
The system requests that the administrator/DEO specify the function he/she would
like to perform (either Add a student, Edit a student, Delete a student or View a
student)
Once the administrator/DEO provides the requested information, one of the flows
is executed.
If the administrator/DEO selects Add a Student, the Add a Student flow is
executed.
If the administrator/DEO selects Edit a Student, the Edit a Student flow is
executed.
If the administrator/DEO selects Delete a Student, the Delete a Student flow
is executed.
If the administrator/DEO selects View a Student, the View a Student flow is
executed.
Appendix_II.indd 564
Once the administrator/DEO provides the requested information, the system checks
that roll no. is unique and generates password. The student is added to the system.
8/25/2011 2:47:17 PM
Appendix II 565
The system requests that the administrator/DEO specify the roll no. of the student
The administrator/DEO enters the roll no. of the student. The system retrieves and
displays the student information.
(iii) The system prompts the administrator/DEO to confirm the deletion of the student.
(iv) The administrator/DEO confirms the deletion.
(v) The system deletes the student record.
Basic Flow 4: View a Student
(i)
(ii)
The system requests that the administrator/DEO specify the roll no. of the student.
The system retrieves and displays the student information.
6. Alternative Flows
Alternative Flow 1: Invalid Entry
If in the Add a Student or Edit a Student flows, the actor enters invalid roll no/ name/
password or roll no/name/password empty, the system displays an appropriate error
message. The actor returns to the basic flow and may reenter the invalid entry.
Alternative Flow 2: Roll no. already exist
If in the Add a Student flow, a student with a specified roll no. already exists, the system
displays an error message. The administrator/DEO returns to the basic flow and may then
enter a different roll no.
Alternative Flow 3: Student not found
If in the Edit a Student or Delete a Student or View a Student flow, a student with the
specified roll no. does not exist, the system displays an error message. The administrator/
DEO returns to the basic flow and may then enter a different roll no.
Alternative Flow 4: Edit cancelled
If in the Edit a Student flow, the administrator/DEO decides not to edit the student, the edit
is cancelled and the Basic Flow is re-started at the beginning.
Alternative flow 5: Delete cancelled
If in the Delete a Student flow, the administrator/DEO decides not to delete the student, the
delete is cancelled and the Basic Flow is re-started at the beginning.
Appendix_II.indd 565
8/25/2011 2:47:17 PM
566
Appendix II
7. Special Requirements
None.
8. Associated use cases
Login, Maintain Student Registration Details, Maintain School Details, Maintain Programme
Details.
Alternate Flow 1
Basic Flow 1
Alternate Flow 2
Basic Flow 1
Alternate Flow 7
Basic Flow 2
Basic Flow 2
Alternate Flow 1
Basic Flow 2
Alternate Flow 3
Basic Flow 2
Alternate Flow 4
Basic Flow 2
Alternate Flow 7
Basic Flow 3
Basic Flow 3
Alternate Flow 3
Basic Flow 3
Alternate Flow 5
Basic Flow 3
Alternate Flow 6
Basic Flow 3
Alternate Flow 7
allowed
Basic Flow 4
Basic Flow 4
Alternate Flow 3
Basic Flow 4
Alternate Flow 7
For maintain a student use case, we identify eight input variables for various basic flows in
the use case. There are four input variables namely roll no., name, year of admission, password
and four selection variables (School selected, Programme Selected, edit confirmed, delete
confirmed) in this use case. These inputs will be available for the respective flows as specified
in the use case.
Appendix_II.indd 566
8/25/2011 2:47:17 PM
Appendix_II.indd 567
Yes
Yes
Yes
do
Scenario
3- Add a
TC6
TC7
alternative
Yes
Yes
do
TC5
Yes
Yes
Yes
do
TC4
No
Yes
do
TC3
alternative
Yes/no
No
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Scenario
2- Add a
Valid
TC2
Valid
n/a
Yes
Scenario
1- Add a
TC1
Yes
Deletion
Test Scenario
Input 1 Input 2
Input 3 Input 4 Input 5
Input 6
Edit
case Name and School Programme Roll no. Name Year of
Password
Id
description selected selected
admission
Table II-8. Test case matrix for the maintain student details use case
a school name
--
name
(Contd.)
Programme
not
a programme
selected
School not
selected
Expected
output
Appendix II 567
8/25/2011 2:47:17 PM
Appendix_II.indd 568
n/a
n/a
n/a
TC12 do
TC13 do
TC14 Scenario
alterna-
n/a
TC11 do
n/a
n/a
n/a
n/a
n/a
alternative
n/a
Scenario
Valid
Valid
Valid
Valid
Valid
Input 2
Input 3
Programme Roll no.
selected
Yes/no
Valid/
invalid
n/a
TC9
alternative
Test Scenario
Input 1
case Name and School
Id
description selected
Yes/no
TC8 Scenario
4- Add a
(Contd.)
n/a
Valid
Valid
Valid/
invalid
Valid
n/a
Valid/
invalid
Valid/
invalid
n/a
Valid
Valid/
invalid
Valid/
invalid
Valid
n/a
n/a
n/a
n/a
n/a
Yes
Input 4 Input 5
Input 6
Edit
Name Year of
Password
admission
Valid/ Valid/
n/a
n/a
invalid invalid
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
password
name
allowed to
Expected
output
(Contd.)
Password is not
--
--
568
Appendix II
8/25/2011 2:47:17 PM
Appendix_II.indd 569
n/a
n/a
TC19 Scenario
12-Delete
alternative
Deletion
cancelled
n/a
n/a
TC18 Scenario
11-Delete
n/a
n/a
n/a
TC17 Scenario
n/a
10-Delete a
alternative
TC16 Scenario
Cancelled
alternative
Valid
Valid
Valid
Valid/
invalid
Valid
n/a
n/a
n/a
Valid/
invalid
Valid
n/a
n/a
n/a
Valid/
invalid
n/a
n/a
n/a
Valid/
invalid
Valid
n/a
n/a
n/a
n/a
No
No
n/a
Yes
n/a
n/a
n/a
TC15 Scenario
n/a
Deletion
Input 1 Input 2
Input 3 Input 4 Input 5
Input 6
Edit
Test Scenario
case Name and School Programme Roll no. Name Year of
Password
description selected selected
Id
admission
(Contd.)
details
appears
is deleted
allowed to
details
appears
Expected
output
--
--
--
--
(Contd.)
Appendix II 569
8/25/2011 2:47:18 PM
Appendix_II.indd 570
alternative
TC24 Scenario
17- View
n/a
n/a
n/a
n/a
TC23 Scenario
16-View
alterna-
n/a
n/a
n/a
n/a
Input 3
Valid/
invalid
Valid
Valid
Valid/
invalid
n/a
n/a
n/a
n/a
Input 6
Edit
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Year of
Password
admission
n/a
n/a
n/a
Input 4 Input 5
Input 2
TC22 Scenario
15- View a
alternative
TC21 Scenario
14-Delete
alternative
Deletion
not allowed
Input 1
Test Scenario
case Name and School
description selected
Id
n/a
TC20 Scenario
13-Delete
(Contd.)
n/a
n/a
n/a
allowed to
allowed to
Deletion
not allowed
n/a
n/a
Expected
output
Deletion
--
--
--
570
Appendix II
8/25/2011 2:47:18 PM
Appendix_II.indd 571
Scenario 1-
Scenario 2-
TC1
TC2
do
do
do
Scenario 3-
TC4
TC5
TC6
TC7
alternative
do
TC3
alternative
Scenario
Name and
description
Test
case
Id
School
selected
MCA
MCA
MCA
MCA
Programme
selected
00616453007
00616453007
0061645
00616453007
Roll no.
Valid
Sharma
Sharma
Name
20009
2009
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Year of
Password Edit
admission
Table II-9. Test case matrix with actual data values for the maintain student details use case
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
Programme
not selected
School not
selected
Expected
output
(Contd.)
contains digits
than 11 digits
select a programme
select a school
name
--
Appendix II 571
8/25/2011 2:47:18 PM
Appendix_II.indd 572
Scenario 4-
TC8
Scenario 6-
TC10
do
Scenario 7-
TC13
TC14
TC15
do
TC12
Cancelled
alternative
Scenario 8-
alternative
do
TC11
alternative
Scenario 5-
TC9
alternative
Scenario
Name and
description
Test
case
Id
(Contd.)
n/a
n/a
n/a
n/a
n/a
n/a
n/a
School
selected
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Programme
selected
Sharma
Sharma
Sharma
Name
00616453007
Sharma
00616453007 n/a
00616453007
00616453007
00616453007
0061645
00616453007
Roll no.
2009
n/a
2009
20009
2009
n/a
ric
n/a
No
n/a
n/a
n/a
n/a
n/a
Yes
n/a
Year of
Password Edit
admission
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
details
appears
word
Expected
output
--
(Contd.)
Password is not
contains digits
Year is not in the
than 11 digits
--
--
572
Appendix II
8/25/2011 2:47:18 PM
Appendix_II.indd 573
Scenario 9-
TC16
TC21
TC20
TC19
Scenario
11-Delete
TC18
Scenario
14-Delete a
Deletion not
allowed
Scenario
13-Delete a
cancelled
alternative
Scenario
12-Delete
alternative
Scenario
10-Delete a
TC17
alternative
Scenario
Name and
description
Test
case
Id
(Contd.)
n/a
n/a
n/a
n/a
n/a
n/a
School
selected
n/a
n/a
n/a
n/a
n/a
n/a
Programme
selected
*
Name
n/a
00616453007 n/a
00616453007 n/a
00616453007 n/a
00616453007 n/a
Roll no.
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Year of
Password Edit
admission
n/a
n/a
No
n/a
Yes
n/a
Deletion
deletion
screen
Deletion
details
appears
Expected
output
--
(Contd.)
deletion cannot
--
--
--
Appendix II 573
8/25/2011 2:47:18 PM
Appendix_II.indd 574
Scenario
Name and
description
Scenario
15- View a
Scenario
16-View a
alternative
Scenario
17- View a
alternative
Test
case
Id
TC22
TC23
TC24
(Contd.)
n/a
n/a
n/a
School
selected
n/a
n/a
n/a
Programme
selected
Name
n/a
00616453007 n/a
00616453007 n/a
Roll no.
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Year of
Password Edit
admission
n/a
n/a
n/a
Deletion
Expected
output
--
--
574
Appendix II
8/25/2011 2:47:18 PM
Appendix II 575
(ii)
The system requests that the administrator/DEO specify the function he/she would
like to perform (either Add a faculty, Edit a faculty, Delete a faculty or View a
faculty)
Once the administrator/DEO provides the requested information, one of the flows
is executed.
If the administrator/DEO selects Add a Faculty, the Add a Faculty flow is
executed.
If the administrator/DEO selects Edit a Faculty, the Edit a Faculty flow is
executed.
If the administrator/DEO selects Delete a Faculty, the Delete a Faculty flow
is executed.
If the administrator/DEO selects View a Faculty, the View a Faculty flow is
executed.
Appendix_II.indd 575
Once the administrator/DEO provides the requested information, the system checks that
employee Id is unique and generates password. The faculty is added to the system.
8/25/2011 2:47:18 PM
576
Appendix II
The system requests that the administrator/DEO enters the employee Id.
The administrator/DEO enters the employee Id of the faculty. The system retrieves
and displays the faculty information.
(iii) The administrator/DEO makes the desired changes to the faculty information. This
includes any of the information specified in the Add a Faculty flow.
(iv) The system prompts the administrator/DEO to confirm the updation of the faculty.
(v) After confirming the changes, the system updates the faculty record with the updated
information.
The system requests that the administrator/DEO specify the employee Id of the
faculty.
(ii) The administrator/DEO enters the employee Id of the faculty. The system retrieves
and displays the faculty information.
(iii) The system prompts the administrator/DEO to confirm the deletion of the faculty.
(iv) The administrator/DEO confirms the deletion.
(v) The system deletes the faculty record.
The system requests that the administrator/DEO specify the employee Id of the
faculty.
The system retrieves and displays the faculty information.
6. Alternative Flows
Alternative Flow 1: Invalid Entry
If in the Add a Faculty or Edit a Faculty flows, the actor enters invalid employee id/ name/
password or employee id/ name/password empty, the system displays an appropriate error
message. The actor returns to the basic flow.
Alternative Flow 2: Employee Id already exist
If in the Add a Faculty flow, an employee with a specified employee Id already exists, the
system displays an error message. The administrator/DEO returns to the basic flow and may
then enter a different employee.
Alternative Flow 3: Faculty not found
If in the Edit a Faculty or Delete a Faculty or View a Faculty flow, a faculty with the
specified employee Id does not exist, the system displays an error message. The
administrator/DEO returns to the basic flow and may then enter a different employee Id.
Alternative Flow 4: Edit cancelled
If in the Edit a Faculty flow, the administrator/DEO decides not to edit the faculty, the edit
is cancelled and the Basic Flow is re-started at the beginning.
Appendix_II.indd 576
8/25/2011 2:47:18 PM
Appendix II 577
7. Special Requirements
None.
8. Associated use cases
Login, Maintain School Details.
(a)
(b)
(c)
(d)
Figure II-1.
Appendix_II.indd 577
8/25/2011 2:47:18 PM
578
Appendix II
Alternate Flow 1
Basic Flow 1
Alternate Flow 2
Basic Flow 1
Alternate Flow 6
Basic Flow 2
Basic Flow 2
Alternate Flow 1
Basic Flow 2
Alternate Flow 3
Basic Flow 2
Alternate Flow 4
Basic Flow 2
Alternate Flow 6
Basic Flow 3
Basic Flow 3
Alternate Flow 3
Basic Flow 3
Alternate Flow 5
Basic Flow 3
Alternate Flow 6
Basic Flow 4
Basic Flow 4
Alternate Flow 3
Basic Flow 4
Alternate Flow 6
For maintain a faculty use case, we identify seven input variables for various basic flows in
the use case. There are three input variables namely employee id, name, password and four
selection variables (school selected, designation selected, edit confirmed, delete confirmed) in
this use case. These inputs will be available for the respective flows as specified in the use
case.
Appendix_II.indd 578
8/25/2011 2:47:18 PM
Appendix_II.indd 579
Yes
Scenario 3-
TC6
TC8
TC7
Yes
Yes
do
TC5
Scenario
alternative
Scenario 4-
alterna-
No
Yes
do
TC4
n/a
Yes
Valid/
Yes
Yes
Yes
Yes/no
Yes/no
Yes/no
do
Valid/
TC3
Valid/
invalid
Scenario
2-Add a
No
Input 5
Edit
Valid
n/a
n/a
n/a
n/a
n/a
n/a
Yes
n/a
n/a
n/a
n/a
n/a
n/a
Designation Password
selected
Yes
n/a
n/a
Input 4
TC2
Name
Input 3
Scenario 1-
Input 2
School
Employee
selected Id
Yes
Input 1
TC1
Test Scenario
case Name and
Id
description
Table II-11. Test case matrix for the maintain faculty details use case
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
Designation
not selected
name
School not
selected
Expected
output
--
--
(Contd.)
present in the
designation
select a school
--
Appendix II 579
8/25/2011 2:47:19 PM
Appendix_II.indd 580
n/a
TC13
TC16 Scenario
alternative
cancelled
TC15 Scenario 9-
alternative
n/a
n/a
n/a
n/a
TC12 do
TC14
n/a
TC11 do
alternative
n/a
alternative
Scenario 6-
Input 2
Valid/
invalid
School
Employee
selected Id
n/a
Input 1
TC10 do
TC9
Test Scenario
case Name and
description
Id
(Contd.)
n/a
Valid/
n/a
Valid/
Name
Input 3
Input 5
Edit
n/a
Yes/no
Yes
n/a
No
Yes/no
Yes/no
n/a
Valid/
invalid
Valid
n/a
n/a
Valid/
invalid
n/a
n/a
No
n/a
n/a
n/a
n/a
Designation Password
selected
Yes/no
Valid/
n/a
invalid
Input 4
Yes
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
appears
Designation
not selected
word
name
Expected
output
--
--
(Contd.)
id is not present
designation
Password is not
580
Appendix II
8/25/2011 2:47:19 PM
Appendix_II.indd 581
TC22
alternative
n/a
Valid/
invalid
n/a
n/a
n/a
n/a
TC21
alternative
Valid/
invalid
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Input 3
Name
Input 2
School
Employee
selected Id
Input 1
TC20
TC19
cancelled
alternative
TC18 Scenario
alternative
TC17 Scenario
Test Scenario
case Name and
description
Id
(Contd.)
Input 5
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Designation Password
selected
Input 4
n/a
n/a
n/a
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
No
n/a
Deletion
appears
Expected
output
--
id is not present
--
--
id is not present
Appendix II 581
8/25/2011 2:47:19 PM
Appendix_II.indd 582
Arvinder
Arvinder
Arvinder
Arvinder
194
194
194
194
194
Scenario 1- Add
Scenario
do
do
do
Scenario 3-
Scenario 4-
TC2
TC3
TC4
TC5
TC6
TC7
Employee Name
Id
Test
case
Id
TC1
n/a
n/a
n/a
n/a
n/a
n/a
Designation Password
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Edit
Table II-12. Test case matrix with actual data values for the maintain faculty details use case
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
Designation
not selected
name
School name
Expected
output
--
(Contd.)
present in the
designation
contains digits
selected school
name
--
582
Appendix II
8/25/2011 2:47:19 PM
Appendix_II.indd 583
n/a
TC14
n/a
n/a
TC13
TC16
n/a
TC12 do
n/a
n/a
TC11 do
cancelled
TC15 Scenario 9-
n/a
TC10 do
n/a
TC9
Scenario 6-
n/a
Test
case
Id
TC8
(Contd.)
194
194
194
197
194
194
194
196
194
n/a
n/a
n/a
Arvinder *
Arvinder
n/a
Arvinder
Edit
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Arvinder123 n/a
Arvinder123 No
n/a
n/a
Arv
Arvinder123 Yes
Designation Password
Arvinder *
Arvinder
Employee Name
Id
Yes
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Deletion
appears
Designation
not selected
word
name
Expected
output
--
--
(Contd.)
id is not present
designation
contains digits
Password is not
--
Appendix II 583
8/25/2011 2:47:19 PM
Appendix_II.indd 584
194
n/a
n/a
TC21
TC22
194
n/a
TC20
194
n/a
cancelled
alternative
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
194
n/a
Designation Password
Employee Name
Id
TC19
TC18
(Contd.)
n/a
n/a
n/a
n/a
n/a
n/a
Edit
n/a
n/a
n/a
n/a
No
n/a
Deletion
appears
Expected
output
--
id is not present
--
--
to cancel the
delete operation
id is not present
584
Appendix II
8/25/2011 2:47:19 PM
Appendix II 585
5. Basic Flow
A. This use case starts when the student/administrator wishes to add or edit student
registration from.
B. The system requests that the student/administrator specify the function he/she would
like to perform (either Add student registration details, Edit student registration details,
View student registration details)
C. Once the student/administrator provides the requested information, one of the flows
is executed.
If the student selects Add Student Registration Details, the Add Student
Registration Details flow is executed.
If the student/administrator selects Edit Student Registration Details, the Edit
Student Registration Details flow is executed.
Basic Flow 1: Add a Student
The system requests that the student enter his/her information. This includes:
1. Father name
2. Address
3. City
4. State
5. Zip
6. Phone
7. Email
8. Semester
9. Elective papers
Once the student provides the requested information, the student is registered to the
system.
Appendix_II.indd 585
8/25/2011 2:47:19 PM
586
Appendix II
6. Alternative Flows
Alternative Flow 1: Invalid mandatory information
If in the Add Student Registration Details or Edit Student Registration Details flows,
the student enters an invalid value in mandatory fields or leaves the mandatory fields empty,
the system displays an error message. The student returns to the basic flow.
Alternative Flow 2: Sufficient number of electives not selected
If in the Add Student Registration Details or Edit Student Registration Details flow, a
student with does not selects required number of electives, the system displays an error
message. The student returns to the basic flow and may then enter the required number of
electives.
Alternative Flow 3: Edit cancelled
If in the Edit Student Registration Details flow, the student/administrator decides not to
edit his/her information, the edit is cancelled and the Basic Flow is re-started at the
beginning.
Alternative Flow 4: Registration closed
If in the Add Student Registration Details or Edit Student Registration Details flow, the
registration date is closed, student is not allowed to add/edit registration details. The use
case ends
Alternative Flow 5: User Exits
This allows the user to exit during the use case. The use case ends.
7. Special Requirements
None.
8. Associated use cases
Login, Maintain Student Details, Maintain School Details, Maintain Programme Details.
Appendix_II.indd 586
8/25/2011 2:47:19 PM
Appendix II 587
Figure II-12.
C.
Table II-13. Scenario matrix for the maintain registration details use case
Basic Flow 1
Basic Flow 1
Alternate Flow 1
Basic Flow 1
Alternate Flow 2
Basic Flow 1
Alternate Flow 4
Basic Flow 1
Alternate Flow 5
closed
Basic Flow 2
Basic Flow 2
Alternate Flow 1
Basic Flow 2
Alternate Flow 2
Basic Flow 2
Alternate Flow 3
Basic Flow 2
Alternate Flow 4
Basic Flow 2
Alternate Flow 5
closed
For maintain student registration use case, we identify ten input variables for various basic
flows in the use case. There are eight input variables namely fathers name, address, city, state,
zip, phone no., email, electives and one selection variables (Semester selected) in this use case.
These inputs will be available for the respective flows as specified in the use case.
Appendix_II.indd 587
8/25/2011 2:47:19 PM
Appendix_II.indd 588
do
do
do
do
do
TC5
TC6
TC7
TC8
TC9
electives
alterna-
do
TC4
TC10
do
alternative
Scenario
Name and
description
TC3
TC2
TC1
Test
case
Id
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Input 1 Input 2
Fathers Address
name
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid
Input 3
City
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Input 4
State
Valid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Input 5 Input 6
Zip
Phone
no.
Valid
Valid
Table II-14. Test case matrix for the maintain registration details use case
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Yes
No
Yes/no
Yes/no
Yes/no
Yes/no
Yes/no
Yes/no
Yes/no
Valid/
Valid/
Valid/
Valid/
Valid/
Valid/
Valid/
Valid/
Input 7 Input 8
Input 9
Email Semester Electives
selected
Valid
Yes
No
No
No
No
No
No
No
No
No
No
--
selected
Semester not
selected
Address
(Contd.)
select a semester
--
Address is not
Fathers name is
Fathers name not in the speci-
registered
Reg.
Expected
closed output
588
Appendix II
8/25/2011 2:47:19 PM
Appendix_II.indd 589
do
do
do
do
do
TC16
TC17
TC18
TC19
alternative
Scenario 7
alternative
closed
alterna-
Scenario
Name and
description
TC15
TC14
TC13
TC12
TC11
Test
case
Id
(Contd.)
Valid
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid
Valid/
invalid
Input 1 Input 2
Fathers Address
name
Valid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid
Valid/
invalid
Valid
Input 3
City
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid/
invalid
Valid
Input 4
State
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid
Valid/
invalid
Input 5 Input 6
Zip
Phone
no.
Valid
Valid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Valid/
invalid
Yes/no
Yes/no
Yes/no
Yes/no
Yes/no
Yes/no
Valid/
invalid
Valid/
invalid
Yes
Yes/no
Valid
Valid/
invalid
Valid/
Valid/
Valid/
Valid/
Valid/
Valid/
Valid/
Input 7 Input 8
Input 9
Email Semester Electives
selected
Valid
Yes
No
No
No
No
No
No
No
No
Yes
--
--
is closed
Address
(Contd.)
Address is not
Fathers name is
Fathers name not in the speci-
Reg.
Expected
closed output
Appendix II 589
8/25/2011 2:47:19 PM
Appendix_II.indd 590
do
TC21
closed
Valid/
invalid
Valid
TC24
TC25
Valid
Valid
Valid
Valid/
invalid
Valid
Valid
Valid
Valid
Input 1 Input 2
Fathers Address
name
Valid
Valid
TC23
electives
do
TC20
TC22
Scenario
Name and
description
Test
case
Id
(Contd.)
Valid/
invalid
Valid
Valid
Valid
Valid
Valid
Input 3
City
Valid/
invalid
Valid
Valid
Valid
Valid
Valid
Input 4
State
Valid/
invalid
Valid
Valid
Valid
Valid
Valid/
invalid
Valid
Valid
Valid
Valid
Input 5 Input 6
Zip
Phone
no.
Valid
Valid
Valid/
invalid
Valid
Valid
Valid
Valid
Yes/no
Yes
Yes
Yes
No
Valid/
Valid/
Input 7 Input 8
Input 9
Email Semester Electives
selected
Yes/no
Valid/
No
Yes
No
No
No
No
appears
selected
Semester not
selected
Reg.
Expected
closed output
--
is closed
--
select a semester
--
590
Appendix II
8/25/2011 2:47:19 PM
Appendix_II.indd 591
do
do
do
do
do
TC5
TC6
TC7
TC8
TC9
tives
do
TC4
TC10
do
TC3
TC2
TC1
Sharma
Sharma
Sharma
Sharma
Sharma
Sharma
Sharma
New
Delhi
New
Delhi
New
Delhi
New
Delhi
New
Delhi
New
Delhi
New
Delhi
New
Delhi
New New
Delhi Delhi
New D1
Delhi
D1
12B
Sharma
Bp123@
Phone
no.
110065 24321322
110065 24321322
110065 24321322
110065 234
110065 24321322
State Zip
New New
Delhi Delhi
City
Sharma
Fathers Address
name
com
No
No
No
No
No
No
No
No
No
No
selected
(Contd.)
it contains digits
Zip is not in the
it contains digits
it contains digits
State is not in the
--
Semester
not selected semester
--
Address
Fathers
name
registered
Table II-15. Test case matrix with actual data values for the maintain registration details use case
Appendix II 591
8/25/2011 2:47:20 PM
Appendix_II.indd 592
alternative
TC19 do
TC18 do
TC17 do
TC16 do
TC15 do
alternative
TC14 Scenario 7
TC13
closed
Sharma
Sharma
Sharma
Sharma
Sharma
12B
New New
Delhi Delhi
New New
Delhi Delhi
New D1
Delhi
D1
Bp123@
New New
Delhi Delhi
*
Phone
no.
110065 234
110065 24321322
Valid
State Zip
Valid Valid
City
Sharma
Valid
TC11
TC12
Fathers Address
name
(Contd.)
Yes
Valid
No
No
No
No
No
No
No
No
Yes
Address
Fathers
name
allowed to
Yes
(Contd.)
it contains digits
it contains digits
it contains digits
--
--
592
Appendix II
8/25/2011 2:47:20 PM
Appendix_II.indd 593
TC25
TC24
TC23
TC22
closed
tives
TC21 do
TC20 do
(Contd.)
Valid Valid
Valid
New New
Delhi Delhi
New New
Delhi Delhi
New New
Delhi Delhi
Phone
no.
Valid
110065 24321322
110065 24321322
110065 24321322
110065 24321322
State Zip
New New
Delhi Delhi
City
Sharma
Sharma
Sharma
Sharma
Fathers Address
name
com
Yes
Valid
No
Yes
No
No
No
No
it does not contain
@ sign
allowed to
Yes
appears
registra-
selected
--
--
--
Semester
not selected semester
Appendix II 593
8/25/2011 2:47:20 PM
Appendix III
Case Study: Consider the problem statement of University Registration System (URS).
Design the test cases from the validity checks given in SRS.
A. Scheme Details Form
This form will be accessible only to system administrator. It will allow him/her to add/edit/
delete/view information about new/existing scheme(s) for the school and programme that were
selected in the Scheme Details form. The list of schools and programmes available in that
particular school will be displayed. The list of semesters available in that particular programme
will also be displayed.
Appendix_III.indd 594
8/25/2011 12:08:08 PM
B. Validity Checks
Validity check No.
Description
VC1
VC2
VC3
VC4
VC5
VC6
VC7
VC8
VC9
VC10
VC11
VC12
VC13
VC14
Appendix_III.indd 595
8/25/2011 12:08:09 PM
Appendix_III.indd 596
VC2
VC3
VC4
VC5
VC6
VC7
VC8
VC9
VC10
VC11
TC1
TC2
TC3
TC4
TC5
TC6
TC7
TC8
TC9
TC10
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
Information Technology
16
11
Prog.
Semester No. of
selected
theory
papers
University School of
MCA
Information Technology
Test
case
Id
11
22
Number of theory
papers cannot be
blank
Semester should be
between 1 to 14
Semester cannot be
blank
--
Please enter
number of theory
papers
Invalid semester
Please enter
semester
Please select
programme
Please select
school
Scheme is added
successfully
No. of
No. of
Total
Expected output
elective practical credits
papers papers
596
Appendix III
8/25/2011 12:08:09 PM
Appendix_III.indd 597
University School of
MCA
Information Technology
TC13 VC14
Figure III-2. Test case with actual data values for the scheme form
University School of
MCA
Information Technology
TC12 VC13
Prog.
Semester No. of
selected
theory
papers
University School of
MCA
Information Technology
TC11 VC12
Test
case
Id
(Contd.)
11
111
Invalid total
credits
Number of theory
total credits should
be between 5 to 99
Total
Expected output
No. of
No. of
elective practical credits
papers papers
8/25/2011 12:08:09 PM
598
Appendix III
B. Validity Checks
Validity check No. Description
VC1
Only Administrator will be authorized to access the Maintain Paper Details module.
VC2
VC3
VC4
No two semesters will have same paper i.e. a paper will be offered only in a
particular semester for a given programme.
School name cannot be blank.
VC5
VC6
VC7
VC8
VC9
VC10
VC11
VC12
VC13
VC14
VC15
Paper name can only have alphanumeric (alphabets and digits) or blank space
characters.
Paper name cannot have special characters.
VC16
VC17
VC18
Appendix_III.indd 598
8/25/2011 12:08:09 PM
Appendix_III.indd 599
VC3
VC4
VC5
VC6
VC7
VC8
TC4
TC5
TC6
TC7
TC8
TC9
TC11 VC10
TC10 VC9
VC3
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
Information Technology
University School of
MCA
Information Technology
*
16
IT105
IT_105
BA607
BA607
BA609
TC3
VC2
TC2
University School of
MCA
Information Technology
University School of
MCA
Information Technology
Test
case
Id
TC1
Discrete
mathematics
*
Discrete
mathematics
Compulsory 4
Compulsory 4
Mathematics-I Compulsory 4
Compulsory 4
Discrete
mathematics
--
--
Please
select
school
Please
select programme
Please enter
semester
Invalid
semester
Credits Expected
output
Paper type
selected
Paper name
8/25/2011 12:08:10 PM
Appendix_III.indd 600
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
TC15 VC13
TC18 VC16
TC21 VC17
1
Figure III-4. Test case with actual data values for paper form
TC22 VC18
TC20 VC16
TC19 VC16
TC17 VC15
TC16 VC14
University School of
MCA
Information Technology
MCA
IT105
IT105
IT111
IT156
IT105
IT109
IT105
IT105
Paper type
selected
Please enter
credits
Invalid
credits
Credits cannot be
blank
Credits should
have value
between 1 to 30
Invalid
Paper name canpaper name not contain special characters
-Paper type is
compulsory
-Paper type is
practical
-Paper type is
elective
Please enter
paper name
--
Invalid
paper code
Invalid
paper code
--
Credits Expected
output
Introduction to Compulsory *
computers
Lab-I
Practical
*
Introduction to *
computers
Data_struct
*
IT123455 *
IT
MCA
IT 107
Paper name
Prog.
Semester Paper
selected
code
TC14 VC12
Test
case
Id
TC12
(Contd.)
600
Appendix III
8/25/2011 12:08:10 PM
B. Validity Checks
Validity check
No.
Description
VC1
VC2
VC3
VC4
VC5
VC6
Length of Roll no. for any user can only be equal to 11 digits.
VC7
Roll no. cannot contain Alphabets, special characters and blank spaces.
VC8
VC9
VC10
VC11
VC12
VC13
VC14
VC15
Alphabets, digits and hyphen & underscore characters are allowed in password
VC16
Appendix_III.indd 601
8/25/2011 12:08:10 PM
Appendix_III.indd 602
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
VC5
VC6
VC7
VC7
VC7
VC8
TC4
TC5
TC6
TC7
TC8
TC9
TC10 VC9
TC11 VC9
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
Information Technology
VC4
TC3
VC3
TC2
University School of
MCA
Information Technology
VC2
Name
00616453007 >50
00616453007 Ru
00616453007
00123 567
006_@546467 *
006tutututu
006164530
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
n/a
Year of
Password
admission
00616453007 Richa
2009
Sharma
TC1
Invalid student
name
Invalid student
name
Please enter
student name
Length of student
name cannot be more
than 50 characters
(Contd.)
Length of student
name cannot be less
than 3 characters
School not
selected
Student is
-added successfully
Expected
output
602
Appendix III
8/25/2011 12:08:10 PM
Appendix_III.indd 603
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
University School of
MCA
Information Technology
TC13 VC11
TC14 VC12
TC15 VC13
TC16 VC14
TC17
TC18 VC15
TC19 VC16
Name
99
00616453007 Richa
2009
Sharma
00616453007 Richa
2009
Sharma
00616453007 Richa
2009
Sharma
00616453007 Richa
2009
Sharma
Year of admission
cannot be less than 4
characters
Year of admission
cannot be blank
Invalid password
Invalid password
Invalid password
Invalid year of
admission
Invalid year of
admission
Invalid student
name
Expected
output
01123456 --
011 67
>15
Rj
n/a
n/a
n/a
Year of
Password
admission
00616453007 Richa
2009
Sharma
00616453007
00616453007 Pooja
sinha
00616453007 Pooj_a
Figure III-6. Test case with actual data values for the student form
University School of
MCA
Information Technology
TC12 VC10
(Contd.)
8/25/2011 12:08:10 PM
604
Appendix III
B. Validity Checks
Validity check
No.
Description
VC1
VC2
VC3
VC4
VC5
VC6
VC7
VC8
Faculty name will only accept alphabetic characters and blank spaces and not
accept special characters.
VC9
VC10
VC11
VC12
Alphabets, digits and hyphen & underscore characters are allowed in password
Appendix_III.indd 604
8/25/2011 12:08:11 PM
Appendix_III.indd 605
VC4
VC5
VC5
VC5
VC6
VC7
VC8
VC9
VC10
VC11
VC12
TC4
TC5
TC6
TC7
TC8
TC9
TC10
TC11
TC12
TC13
TC14
Figure III-8. Test case with actual data values for the faculty form
VC3
TC3
Name
Test
Validity
School selected
Employee Id
case Id check No.
TC1
VC1
University School of
194
Information Technology
TC2
VC2
*
--
8/25/2011 12:08:11 PM
606
Appendix III
Appendix_III.indd 606
8/25/2011 12:08:12 PM
B. Validity Checks
Validity check No.
Description
VC1
VC2
VC3
Fathers name cannot include special characters and digits, but blank
spaces are allowed.
VC4
VC5
VC6
VC7
VC8
City cannot include special characters and numeric digits, but blank spaces
are allowed.
VC9
VC10
VC11
State cannot include special characters and numeric digits, but blank spaces
are allowed.
VC12
VC13
VC14
VC15
VC16
VC17
VC18
VC19
VC20
VC21
VC22
VC23
VC24
VC25
VC26
Appendix_III.indd 607
8/25/2011 12:08:12 PM
Appendix_III.indd 608
VC3
VC3
VC4
VC4
VC5
VC6
VC6
TC2
TC3
TC4
TC5
TC6
TC7
TC8
Gupta
Gupta
Gupta
>50
VC2
TC1
>200
E-32
Test
case
id
City
State Zip
Phone
Invalid
address
Please
enter
address
Invalid
address
Invalid
fathers
name
Invalid
fathers
name
Please
enter
fathers
name
Invalid
fathers
name
Invalid
fathers
name
Address cannot
be less than 10
characters
Address name
cannot be
greater than
200 characters
(Contd.)
Fathers name
cannot contain digits
Fathers name
cannot contain special
characters
Fathers name
cannot be
less than 3
characters
Fathers name
cannot be
greater than
50characters
Address cannot be blank
Fathers name
cannot be
blank
608
Appendix III
8/25/2011 12:08:12 PM
Appendix_III.indd 609
TC18
TC17
TC16
TC15
TC14
TC13
TC12
TC11
TC10
Test
case
id
TC9
(Contd.)
De12 *
De
>20
De
Delhi
>20
De12 *
Delhi
Delhi
De_l
Delhi
Delhi
Phone
State Zip
De_l
City
Invalid
state
Invalid
state
Invalid
state
Please
enter
state
Invalid
state
Invalid
city
Invalid
city
Invalid
city
Invalid
city
State cannot
contain special characters
State cannot contain
numeric digits
State cannot
be less than 3
characters
State cannot
be greater
than 20 characters
(Contd.)
City cannot
contain special characters
City cannot contain
numeric digits
City cannot be
less than 3
characters
City cannot be
greater than
20 characters
State cannot
be blank
Please
City cannot be
enter city blank
8/25/2011 12:08:12 PM
Appendix_III.indd 610
TC27 VC18
TC26 VC17
TC25 VC17
TC24 VC17
TC23 VC16
TC22 VC15
TC21 VC14
Gupta
Gupta
Gupta
Gupta
Gupta
Gupta
Gupta
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
Test
case
id
TC19
(Contd.)
Delhi 110ty
Delhi 110,89
Delhi
Delhi
Delhi
Delhi
Delhi
Delhi
Delhi
Phone
Delhi 110045
Delhi 110045
Delhi 110045
Delhi 110045
Delhi 110045
911023444534 *
981 435677
981,yut
981retee
Delhi 1100245 *
Delhi
Delhi
Delhi
State Zip
City
Invalid
phone
Invalid
phone
Invalid
phone
Please
enter
phone
Invalid
phone
Invalid
zip
Invalid
zip
Invalid
zip
Please
enter zip
Zip cannot be
blank
610
Appendix III
8/25/2011 12:08:12 PM
Appendix_III.indd 611
Gupta
Gupta
Gupta
Gupta
Gupta
Gupta
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
E-32,
Kailash
Colony
Delhi 110045
Delhi
Delhi
Delhi
Delhi
Delhi
Delhi
26546789
26546789
26546789
26546789
26546789
26546789
26546789
26546789
Phone
Figure III-10. Test case with actual data values for the student registration form
Delhi 110045
Delhi 110045
Delhi 110045
Delhi 110045
Delhi 110045
Delhi 110045
Delhi 110045
Delhi
Delhi
State Zip
City
TC35 VC25
TC34 VC24
TC33 VC23
TC32 VC22
TC31 VC21
TC30 VC21
Test
case
id
TC28
(Contd.)
ruchi@
yahoo.com
ruchi@
yahoo.com
ruchi@
yahoo.com
Ruch@ ya
ho o.com
Ruch@
yahoocom
yes
IT 702
Please
select
semester
Student
is registered
successfully
Invalid
number
of electives
selected
Invalid
email
Invalid
email
Invalid
email
Please
enter
email
Invalid
email
User selected
less number
of electives
Semester
does no contain a elective
paper
Email cannot
be greater
than 50 characters
Email should
contain @
character
Email should
contain . character
Email cannot
contain blank
spaces
Semester is
not selected
Email cannot
be blank
Ruchyahoo. *
com
>50
8/25/2011 12:08:12 PM
References
[AGGA06]
[AGGA04]
[AGGA08]
[ALI10]
[ANDE98]
[ANSI91]
[BACH90]
[BALA07]
[BEIZ90]
[BENT04]
[BERT04]
References 613
[BEIM95]
[BIND94]
[BINK98]
[BITT03]
[BOGU09]
[BOYE75]
[BRUN06]
[CHID94]
[CLAR76]
[COCK01]
[COPE04]
[DANI99]
[EDVA99]
[EDWA06]
[EMAM99]
[GOOD93]
[FENT04]
[FOUR09]
[FINK93]
[FINN96]
614
References
[HETT01]
[HEUM01]
[HUMP02]
[IEEE93]
[IEEE98a]
[IEEE98b]
[IEEE98c]
[IEEE01]
[INCE87]
[JACO99]
[JELI72]
[JONE96]
[JORG07]
[JOSH03]
[KAUR06]
[KEIT91]
[LEE95]
[LI93]
[LION96]
Hatty Baiz and Nancy Costa, Project Audit and Review Checklist, Princeton
Project Office, Princeton University, New jersey, USA, hetty@princeton.edu,
ncosta@princeton.edu, 2001.
J. Heumann, Generating Test Cases from Use Cases, Rational Edge, www.
therationaledge.com, 2001.
W.S. Humphrey, Managing the Software Process, Software Engineering
Institute of Carnegie Mellon University, USA, 2002.
IEEE, IEEE Guide to Software Requirements Specifications (IEEE Std 830
1993), 1993.
IEEE, IEEE Recommended Practice for Software Requirements Specifications
(IEEE Std 8301998), 1998.
IEEE, IEEE Recommended Practice for Software Design Description (IEEE
Std 10161998), 1998.
IEEE, IEEE Standard for Test Documentation (IEEE Std 8291998), 1998.
IEEE, Standard Glossary of Software Engineering Terminology, 2001.
D.C. Ince, The Automatic Generation of Test Data, The Computer Journal, vol.
30, no. 1, pp. 6369, 1998.
I.V. Jacobson et al., Object Oriented Software Engineering, Pearson Education,
1999.
Z. Jelinski and P.B. Moranda, Software Reliability Research in Statistical
Computer Performance Evaluation, Academic Press, NY, 1972.
B.F. Jones, H.H. Sthamer and D.E. Eyres, Automatic Structural Testing using
Genetic Algorithms, Software Engineering Journal, September, 299306,
1996.
P.C. Jorgenson, Software Testing, A Craftsmans Approach, 3rd edition,
Auerbach Publications, USA, 2007.
Joshi S.D., Object Oriented Modeling and Design, Tech-Max, 2003.
Arvinder Kaur, Development of Techniques for Good Quality Object-Oriented
Software, Ph.D. dissertation, Guru Gobind Singh Indraprastha University, Delhi
2006.
Keith and James, Using Program Slicing In Software Maintenance, IEEE
Transactions on Software Engineering, vol. 17, no. 8, pp. 751761 August,
1991.
Y. Lee, B. Liang, S. Wu and F. Wang, Measuring the Coupling and Cohesion of
an Object-Oriented program based on Information flow, Proceedings of
International Conference on Software Quality, Maribor, Slovenia 1995.
W. Li, S. Henry, Object-Oriented Metrics that Predict Maintainability, Journal
of Systems and Software, vol. 23, no. 2, pp. 111122, 1993.
J.L. Lions et al., Report of the Enquiry Board Constituted by Director General
of ESA for the Identification of the Causes of Failure, www.esrin.esa.it, July 19,
Paris, 1996.
References 615
[LORE94]
[MACC76]
[MCGR01]
[MALH09]
[MALH10]
[MANN02]
[MATH08]
[MCMI04]
[MICH01]
[MUSA79]
[MYER04]
[NASA04]
[NORM89]
[PFLE01]
[PRAV09]
[PRESS97]
[QUAT03]
[RAMA76]
[SHNE80]
[SING10]
[STALL01]
616
References
[STEP93]
[TEGA92]
[VOAS95]
[WEIS84]
[WHIT00]
Chapter 1
1.1
(c)
1.2
(b)
1.3
(c)
1.4
(c)
1.5
(a)
1.6
(b)
1.7
(c)
1.8
(a)
1.9
(a)
1.10 (b)
1.11
1.12
1.13
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
(a)
(d)
(c)
(d)
(a)
(d)
(a)
(d)
(c)
(b)
Chapter 2
2.1
(b)
2.2
(a)
2.3
(c)
2.4
(c)
2.5
(d)
2.6
(b)
2.7
(a)
2.8
(c)
2.9
(d)
2.10 (a)
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
(c)
(a)
(b)
(b)
(b)
(c)
(d)
(a)
(c)
(c)
Chapter 3
3.1
(a)
3.2
(d)
3.3
(a)
3.4
(b)
3.5
(b)
3.6
(a)
3.7
(a)
3.8
(c)
3.9
(a)
3.10 (b)
3.11
3.12
3.13
3.14
3.15
3.16
3.17
3.18
3.19
3.20
(c)
(c)
(b)
(a)
(a)
(c)
(d)
(a)
(b)
(b)
(d)
(a)
(c)
(c)
(d)
(d)
(c)
(c)
(d)
(a)
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.39
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
(b)
(b)
(a)
(a)
(b)
(a)
(b)
(d)
(a)
(d)
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
(a)
(c)
(b)
(a)
(b)
(d)
(c)
(d)
(d)
(a)
(c)
(b)
(c)
(d)
(b)
(b)
(a)
(c)
(b)
(a)
618
Chapter 4
4.1
4.2
4.3
4.4
4.5
4.6
4.7
4.8
4.9
4.10
(d)
(d)
(d)
(a)
(a)
(c)
(c)
(b)
(a)
(a)
4.11
4.12
4.13
4.14
4.15
4.16
4.17
4.18
4.19
4.20
(a)
(d)
(c)
(a)
(a)
(b)
(a)
(d)
(d)
(a)
5.11
5.12
5.13
5.14
5.15
5.16
5.17
5.18
5.19
5.20
(a)
(b)
(d)
(a)
(d)
(a)
(c)
(d)
(a)
(b)
6.11
6.12
6.13
6.14
6.15
(b)
(d)
(a)
(a)
(b)
7.11
7.12
7.13
7.14
7.15
(b)
(b)
(b)
(c)
(a)
Chapter 5
5.1
5.2
5.3
5.4
5.5
5.6
5.7
5.8
5.9
5.10
(d)
(a)
(b)
(a)
(c)
(b)
(c)
(b)
(a)
(b)
Chapter 6
6.1
6.2
6.3
6.4
6.5
6.6
6.7
6.8
6.9
6.10
(d)
(c)
(a)
(d)
(b)
(d)
(d)
(a)
(c)
(a)
Chapter 7
7.1
7.2
7.3
7.4
7.5
7.6
7.7
7.8
7.9
7.10
(b)
(c)
(c)
(d)
(a)
(d)
(c)
(d)
(c)
(b)
5.21
5.22
5.23
5.24
5.25
5.26
5.27
5.28
5.29
5.30
(b)
(c)
(d)
(c)
(c)
(d)
(c)
(c)
(a)
(d)
(c)
(b)
(d)
(b)
(a)
(d)
(a)
(d)
(d)
(c)
8.21
8.22
8.23
8.24
8.25
(a)
(d)
(d)
(a)
(b)
(d)
(a)
(c)
(d)
(d)
(a)
(b)
(a)
(d)
(a)
10.21
10.22
10.23
10.24
10.25
(b)
(b)
(c)
(d)
(d)
(c)
(b)
(a)
(d)
(c)
(a)
(b)
(c)
(b)
(d)
11.21
11.22
11.23
11.24
11.25
11.26
11.27
11.28
11.29
11.30
(b)
(a)
(c)
(b)
(a)
(b)
(b)
(c)
(d)
(b)
8.11
8.12
8.13
8.14
8.15
8.16
8.17
8.18
8.19
8.20
(c)
(d)
(a)
(a)
(b)
(c)
(c)
(d)
(a)
(d)
9.11
9.12
9.13
9.14
9.15
9.16
9.17
9.18
9.19
9.20
(a)
(b)
(a)
(b)
(a)
(c)
(d)
(a)
(b)
(d)
10.11
10.12
10.13
10.14
10.15
10.16
10.17
10.18
10.19
10.20
11.11
11.12
11.13
11.14
11.15
11.16
11.17
11.18
11.19
11.20
Chapter 9
9.1
9.2
9.3
9.4
9.5
9.6
9.7
9.8
9.9
9.10
(a)
(b)
(b)
(c)
(c)
(c)
(d)
(c)
(a)
(a)
Chapter 10
10.1
10.2
10.3
10.4
10.5
10.6
10.7
10.8
10.9
10.10
(c)
(d)
(b)
(a)
(a)
(b)
(c)
(c)
(a)
(d)
Chapter 11
11.1
11.2
11.3
11.4
11.5
11.6
11.7
11.8
11.9
11.10
(a)
(d)
(b)
(a)
(c)
(b)
(d)
(b)
(b)
(a)
619
620
Chapter 12
12.1
12.2
12.3
12.4
12.5
12.6
12.7
12.8
12.9
12.10
(a)
(b)
(c)
(b)
(b)
(d)
(a)
(c)
(a)
(d)
12.11
12.12
12.13
12.14
12.15
(c)
(a)
(c)
(a)
(c)
Index
622
Index
deliverables, 22
directed graph (digraph), 110, 118119
disconnected graph, 118
distinct edges, 112
documentation manuals, 19
driver, 369
dynamic software testing tools, 381382
dynamic testing, 24
encapsulation, 394395
equivalence class testing
applicability, 6566
creation of, 6365
graphical representation of inputs, 64
failure intensity, 431432
failure, definition, 21
failure-based failure specification, 428
Fast Track Courier, 393
fault, definition, 21
flow graph generator tools, 380
flows, 290
Fournier, Greg, 286
functional testing techniques
boundary value analysis, 3862
cause-effect graphing technique, 9699
decision table based testing, 8196
equivalence class testing, 6381
robust worst-case testing, 4647
robustness testing, 4344
worst-case testing, 4446
genetic algorithm (GA)
crossover and mutation, 503504
fitness function, 504505
initial population, 503
selection operator, 505
stopping criteria, 505506
graph matrix
of a graph, 115
of the program graph, 150153
Decision to Decision (DD) path graph, 127
definition of graph, 110113
degree of a node, 112113
diagrammatical representation, 110111
directed (digraph), 110
generation from a program, 123127
identification of independent paths of a
program, 144158
matrix representation, 113116
null, 110
paths and independent paths, 116119
regular, 113
Index
simple, 112
undirected, 110
IBM Rationals Performance Tester, 382
IBM Rationals Robot, 382
IEEE Std 830-1998, 285, 299, 382
incidence matrix of a graph, 114
incident on nodes, 110
independent path, in a graph, 116
inspections, 231232
integration testing, 370373
isolated node, 110
Jacobson, Ivar, 285
Jelinski--Moranda model, for reliability
estimation, 422, 437
length of the path, 116
limited entry decision tables, 82
logarithmic poisson execution time model, 422,
434435
logical database requirements, 276279
loop, 111
Marchetti, E., 369
mean absolute relative error (MARE), 443
mean relative error (MRE), 444
Mercury Interactives Load Runner, 382, 511
Mercury Interactives Win Runner, 382, 511
methods, 230232
problem statement of a university registration
system, 257258
software design description (SDD)
verification, 231, 239240
software requirements specification (SRS)
document verification, 231238
source code reviews, 241243
user manuals, 244245
milestones, 22
Minimum program
critical/typical situations, 68
inputs and outputs of, 6
modified, 1014
possible reasons of failures, 89
modification algorithm, 347352
multigraph, 112
Musa, J. D., 430431
basic execution time model, for reliability
estimation, 422
mutant of the original program, 213215
mutant operators, 216
mutation score, associated with a test suite,
216217
mutation testing, 212223
623
624
Index
Index
625
626
Index
walkthroughs, 231
weakly connected graph, 119
web testing
analysis of results and observations, 469
browser testing, 470
configuration and compatibility testing,
469471
database testing, 480482
execution of usability tests, 469
functional testing, 456458
key areas in, 455456
performance testing, 476480
post-deployment testing, 482485
security testing, 471475
user interface testing, 458469
web page metrics, 485486
web server architecture vs client-server
architecture, 453455
weighted graph, 112
worst-case testing, 4446