Basic-Software-Testing - 28 03 2015


Software Testing:

Testing is a process of executing a program with the intent of finding errors.

General definition:
Testing = Verification + Validation
Verification: is the system being built right?
"Are we building the product right?"

Validation: is it the right system?
"Are we building the right product?"

What should be done during testing?

Confirming that the product
 has been developed according to specifications
 works correctly
 satisfies customer requirements

Why should we do testing?


 An error-free, superior product
 Quality assurance to the client
 Competitive advantage
 Reduced costs

How to test?

Testing can be done in the following ways:


 Manually
 Automation (using tools such as Rational Robot, WinRunner, LoadRunner, TestDirector, etc.)
 A combination of manual and automation.
TEST PLAN

1.1 What is a Test Plan?


A Test Plan can be defined as a document that describes the scope, approach, resources
and schedule of intended test activities. It identifies test items, the features to be
tested, the testing tasks, who will do each task, and any risks requiring contingency
planning.

The main purpose of preparing a Test Plan is to ensure that everyone concerned with the
project is in sync with regard to the scope, responsibilities, deadlines and deliverables for
the project.

Purpose of preparing a Test Plan

A Test Plan is a useful way to think through the efforts needed to validate the
acceptability of a software product.
The completed document will help people outside the test group understand the 'why'
and 'how' of product validation.
It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it.

Contents of a Test Plan


1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
a. Procedures
b. Templates
c. Standards/Guidelines
14. Annexure
15. Sign-Off

[Fig: SDLC - STLC (V model), pairing development phases with test levels:
Requirement Study -> Production Verification Testing;
High Level Design -> User Acceptance Testing;
Low Level Design -> System Testing;
Unit Testing -> Integration Testing]

Let us see a brief definition of the widely employed types of testing.

Unit Testing: Testing done on a unit, the smallest piece of software, to verify that it
satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e.,
integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes)
have not caused unintended effects and that the system still complies with its specified
requirements.

System Testing: Testing the software against the required specifications on the intended
hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies
its acceptance criteria, enabling the customer to decide whether or not to accept the system.

Performance Testing: To evaluate the time taken, or response time, of the system to perform
its required functions, in comparison with the specified performance requirements.

Stress Testing: To evaluate a system beyond the limits of the specified requirements or
system resources (such as disk space, memory or processor utilization) to ensure the system
does not break unexpectedly.

Load Testing: Load testing, a subset of stress testing, verifies that a web site can handle a
particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by
the customer.

Beta Testing: Testing conducted at one or more customer sites by the end users of a delivered
software product or system.

Port Testing: This is to test the installation process.

Some Testing Terminology:-

Monkey / Chimpanzee Testing: Covering only the main activities of your application during
testing is called monkey testing.

Exploratory Testing: Covering the activities of your application level by level during
testing is called exploratory testing.

Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks
whether the build released by the development team is stable enough for complete testing.

Configuration Testing: This test is also known as hardware compatibility testing. During this
test, the test engineer validates whether the application build supports different hardware
devices and technologies.

Inter Systems Testing: This test is also known as end-to-end testing. During this test, the
test engineer validates whether the application build can coexist with other existing
software at the customer site to share resources (hardware or software).
Installation Testing: Testing the application's installation process in the customer-specified
environment and conditions.

[Flow: Development team releases a build -> Sanity Test / Tester Acceptance Test ->
Functional & System Testing]

Smoke Testing: An extra shakeup of sanity testing is called smoke testing. The testing team
rejects a build, with reasons, back to the development team before starting testing.

Bebugging: The development team releases a build with known bugs to the testing team.

Big Bang Testing: A single stage of testing after all modules have been developed is called
big bang testing. It is also known as informal testing.

Incremental Testing: A multi-stage testing process is called incremental testing. It is also
known as formal testing.

Important Testing Terminology:

 Unit Testing: Concentrates on each unit (module, component, …) of the software as
implemented in source code.
 Integration Testing: Putting the modules together and constructing the software
architecture.
 System and Functional Testing: The product is validated with other system elements and
tested as a whole.
 User Acceptance Testing: Testing by the user to collect feedback.

Maintenance: Change associated with error correction, adaptation and enhancements.

 Correction: Changes the software to correct defects.
 Adaptation: Modifies the software to accommodate changes to its external environment.
 Enhancement: Extends the software beyond its original functional requirements.

Prevention: Changes the software so that it can be more easily corrected, adapted and
enhanced.

WBT: A coding-level testing technique to verify the completeness and correctness of the
programs. Also called Glass Box Testing (Glass BT) or Clear Box Testing (Clear BT).

Advantages of White Box Testing


 Forces test developer to reason carefully about implementation
 Approximate the partitioning done by execution equivalence
 Reveals errors in "hidden" code
 Beneficent side-effects
Disadvantages of White Box Testing
 Expensive
 Cases omitted from the code could be missed.

BBT: An executable-level (.exe) testing technique to validate the functionality of an
application against customer requirements. During this test, the engineer validates internal
processing through the external interfaces.

Verification:
- Whether system is right or wrong?
- Are we building the product right?
- Looks at Process compliance.
- Preventive in nature
- IEEE/ANSI definition:
 The process of evaluating a system or component to determine whether
the products of a given development phase satisfy the conditions
imposed at the start of that phase

Validation:
- Whether system is right system or not?
- Are we building the right product?
- Looks at Product compliance.
- Corrective in nature but can be preventive also.
- IEEE/ANSI definition:
 The process of evaluating a system or component during or at the end
of the development process to determine whether it satisfies specified
requirements
- Verification and Validation are complementary
Grey Box Testing:
DEFINITION

Gray Box Testing is a software testing method which combines the Black Box Testing and White
Box Testing methods. In Black Box Testing, the internal structure of the item being tested is
unknown to the tester; in White Box Testing, the internal structure is known. In Gray Box
Testing, the internal structure is partially known. This involves having access to internal
data structures and algorithms for the purpose of designing the test cases, but testing at
the user, or black-box, level.

Gray Box Testing is named so because the software program, in the eyes of the tester is like a
gray/semi-transparent box; inside which one can partially see.

EXAMPLE

An example of Gray Box Testing would be when the code for two units/modules is studied (White
Box Testing method) to design test cases, and the actual tests are conducted through the
exposed interfaces (Black Box Testing method).

LEVELS APPLICABLE TO

Though Gray Box Testing method may be used in other levels of testing, it is primarily useful in
Integration Testing.

ADVANTAGES OF GRAY BOX TESTING


 Derived from the combination of the advantages of Black Box Testing and White Box
Testing.

DISADVANTAGES OF GRAY BOX TESTING


 Derived from the combination of the disadvantages of Black Box Testing and White Box
Testing.

What is a use case? What is the difference between a use case and a test case?


A use case is a series of steps an actor performs on a system to achieve a goal; it is a
technique for capturing the potential requirements of a new system or a software change. A
test case describes an action, event, input requirements and actual result; it has a test
case name, a test case identifier and an expected result.

Use cases are high-level requirements and describe the functionality from the user's point of view.

Test cases are low-level requirements and describe each step of the functionality.

Quality:

 Meet customer requirements


 Meet customer expectations (cost to use, speed in process or performance, security)
 Possible cost
 Time to market

V model:

Build: When coding-level testing is over and the modules are completely integration-tested,
the resulting executable (.exe) is called a build. A build is created after integration testing.

Test Management: Testers maintain documents related to every project. They refer to these
documents for future modifications.
[Fig: test activities across the life cycle:
Information Gathering & Analysis -> Assessment of Development Plan, Prepare Test Plan,
Requirements Phase Testing;
Design -> Design Phase Testing;
Coding -> Program Phase Testing (WBT);
Install Build -> Test Environment Process, Functional & System Testing, User Acceptance
Testing, Port Testing;
Maintenance -> Test Software Changes, Test Efficiency]

Change Request: The request made by the customer to modify the software.

Defect Removal Efficiency:

DRE = a / (a + b)

a = total number of defects found by testers during testing.
b = total number of defects found by the customer during maintenance.

DRE is also called DD (Defect Deficiency).
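The DRE formula above can be sketched in a few lines of Python; the example counts (90 defects found in testing, 10 found by the customer) are hypothetical:

```python
def defect_removal_efficiency(found_in_testing, found_in_maintenance):
    """DRE = a / (a + b), where a = defects found by testers during testing
    and b = defects found by the customer during maintenance."""
    total = found_in_testing + found_in_maintenance
    if total == 0:
        return 0.0  # no defects recorded at all
    return found_in_testing / total

# Example: testers found 90 defects, the customer found 10 more.
print(defect_removal_efficiency(90, 10))  # 0.9
```

A DRE close to 1.0 means testing caught almost everything before release.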

BBT, UAT and the test management process are where the independent testers or testing team
will be involved.

Reviews during Analysis:


The Quality Analyst decides on 5 topics. After completion of information gathering and
analysis, a review meeting is conducted to decide the following 5 factors.

1. Are they complete?
2. Are they correct?
3. Are they achievable?
4. Are they reasonable? (with respect to cost & time)
5. Are they testable?

Reviews during Design:


After the completion of the analysis of customer requirements and their reviews, technical
support people (Tech Leads) concentrate on the logical design of the system. At every stage
they develop the HLDD and LLDD.
The following factors apply in this review.

 Is the design good? (understandable or easy to refer to)
 Is it complete? (are all the customer requirements satisfied?)
 Is it correct? (is the design flow correct?)
 Is it followable? (is the design logic correct?)
 Does it handle errors? (the design should specify the negative flow as well as the
positive)

[Fig: example login flow: User Login (with user information) -> Inbox for a valid user; an
invalid user is returned to the login screen]
GUI Testing

What is GUI (Graphic User Interface) Testing?

It is absolutely essential that any application be user-friendly. The end user should be
comfortable using all the components on screen, and the components should also perform their
functionality with utmost clarity. GUI testing can refer to just ensuring that the
look-and-feel of the application is acceptable to the user, or it can refer to testing the
functionality of each and every component involved.

The following is a set of guidelines to ensure effective GUI Testing and can be used even as a
checklist while testing a product / application.

Unit Testing:

After the completion of design and design reviews, programmers concentrate on coding. During
this stage they conduct program-level testing with the help of WBT techniques. WBT is also
known as glass box testing or clear box testing.

WBT is based on the code. Senior programmers conduct testing on the programs; WBT is applied
at the module level.

There are two types of WBT techniques:

1. Execution Testing
 Basis path coverage (correctness of every statement's execution)
 Loop coverage (correctness of loop termination)
 Program technique coverage (fewer memory cycles and CPU cycles during execution)

2. Operations Testing: Whether the software runs under the customer's expected environment
platforms (system software such as OS, compilers, browsers, etc.).

Unit Tests
Intent:
 Uncover errors in an individual unit, often a class
 Provide a safety net for aggressive development
Tests should
 be black box and white box
 be based on unit specifications
 be binary Pass/Fail
 test usual operation
 test unusual (exceptional) operation
 detect if the unit fails to do what it should
 detect if the unit does what it should not
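The criteria above (binary pass/fail, usual and exceptional operation) can be sketched as plain Python tests; the `divide` function is a hypothetical unit under test, not something from these notes:

```python
def divide(a, b):
    """Hypothetical unit under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_usual_operation():
    # Binary pass/fail: detect if the unit fails to do what it should.
    assert divide(10, 2) == 5

def test_exceptional_operation():
    # Detect if the unit does what it should not (it must raise, not return).
    try:
        divide(1, 0)
    except ValueError:
        return
    raise AssertionError("expected ValueError for division by zero")

test_usual_operation()
test_exceptional_operation()
print("all unit tests passed")
```

In practice the same tests would live in a framework such as unittest or pytest rather than being called by hand.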

Integration Testing:

After the completion of unit testing, the development people concentrate on integration
testing once the dependent modules have finished unit testing. During this test, programmers
verify the integration of modules with respect to the HLDD (which contains the hierarchy of
modules).
There are two types of approaches to conduct Integration Testing:

 Top-down Approach
 Bottom-up approach.

Stub: A called program. It returns control to the main module instead of invoking the real
sub module.
Driver: A calling program. It invokes a sub module in place of the main module.
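A minimal sketch of both ideas in Python; the login scenario and all names here are assumed for illustration:

```python
# Top-down: the main module is real; a stub stands in for an unfinished sub module.
def login_stub(user, password):
    # Stub: returns a canned result and hands control straight back to main.
    return True

def main_module(authenticate=login_stub):
    if authenticate("admin", "secret"):
        return "welcome"
    return "denied"

# Bottom-up: the sub module is real; a driver stands in for the missing main.
def login(user, password):
    return user == "admin" and password == "secret"

def driver():
    # Driver: invokes the sub module with default test inputs (uid and pwd).
    return login("admin", "secret")

print(main_module())  # exercises the main module via the stub
print(driver())       # exercises the sub module via the driver
```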

Top-down: This approach starts testing from the root (main module), using stubs in place of
lower-level sub modules.

[Fig: Main connected to a Stub and to Sub Modules 1 and 2]
Bottom-up: This approach starts testing from the lower-level modules. Drivers are used to
invoke the sub modules (e.g., for login, create a driver that supplies a default uid and pwd).

[Fig: a Driver in place of Main, invoking Sub Modules 1 and 2]
Sandwich: This approach combines the top-down and bottom-up approaches of integration
testing. The middle-level modules are tested using both drivers and stubs.

[Fig: a Driver in place of Main; Sub Module 1 connected to a Stub and to Sub Modules 2 and 3]

System Testing: After the completion of coding and the code-level tests (unit and
integration), the development team releases all the finally integrated modules as a build.
After receiving a stable build from the development team, a separate testing team
concentrates on functional and system testing with the help of BBT.

This testing is classified into 4 divisions.

 Usability Testing (ease of use; low priority in testing)
 Functional Testing (functionality is correct or not; medium priority in testing)
 Performance Testing (speed of processing; medium priority in testing)
 Security Testing (attempts to break the security of the system; high priority in testing)

10.2 Black Box Testing

Black box testing treats the system as a "black box", so it doesn't explicitly use knowledge
of the internal structure or programming knowledge.

It focuses on the functionality of the module. Some people call black box testing behavioral,
functional, opaque-box or closed-box testing. Behavioral test design is slightly different
from black-box test design because the use of internal knowledge isn't strictly forbidden,
but it's still discouraged.
[Fig: Black box testing: Input -> Application -> Output]

There are some bugs that cannot be found using only black box or only white box testing. If
the test cases are extensive and the test inputs are drawn from a large sample space, it is
usually possible to find the majority of bugs through black box testing.

Tools used for Black Box testing:

The basic functional or regression testing tools capture the results of black box tests in a script
format. Once captured, these scripts can be executed against future builds of an application to
verify that new functionality hasn't disabled previous functionality.

Advantages of Black Box Testing


- The tester can be non-technical.
- This testing is most likely to find the bugs the user would find.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing


- Chances of repeating tests already done by the programmer.
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test
cases is slow and difficult, and there may be unidentified paths left during testing.

10.2.1 Graph Based Testing Methods


Software testing begins by creating a graph of important objects and their relationships,
then devising a series of tests that will cover the graph so that each object and
relationship is exercised and errors are uncovered.

10.2.2 Error Guessing


Error guessing comes with experience with the technology and the project. It is the art of
guessing where errors may be hidden. There are no specific tools and techniques for this;
you write test cases depending on the situation.

10.2.3 Boundary Value Analysis


Boundary Value Analysis (BVA) is a test data selection technique (Functional Testing technique)
where the extreme values are chosen. Boundary values include maximum, minimum, just
inside/outside boundaries. The hope is that, if a system works correctly for these special values
then it will work correctly for all values in between.
 Extends equivalence partitioning
 Test both sides of each boundary
 Look at output boundaries for test cases too
 Test min, min-1, max, max+1, typical values

 BVA focuses on the boundary of the input space to identify test cases
 The rationale is that errors tend to occur near the extreme values of an input variable
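The value selection above (min, min-1, max, max+1, plus a typical value) can be sketched for a single numeric field; the age range 18..65 is an assumed example:

```python
def bva_values(min_val, max_val, typical):
    """Boundary values for a numeric input range [min_val, max_val]:
    both sides of each boundary, plus a typical (nominal) value."""
    return [min_val - 1, min_val, min_val + 1,
            typical,
            max_val - 1, max_val, max_val + 1]

# Hypothetical field: an age input accepting 18..65
print(bva_values(18, 65, 30))  # [17, 18, 19, 30, 64, 65, 66]
```

The values just outside the range (17 and 66 here) belong to robustness testing and should be rejected by the application.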

There are two ways to generalize the BVA technique:

1. By the number of variables
o For n variables: BVA yields 4n + 1 test cases.
2. By the kinds of ranges
o Generalizing ranges depends on the nature or type of variables
 Next Date has a variable Month and the range could be defined as
{Jan, Feb, …Dec}
 Min = Jan, Min +1 = Feb, etc.
 Triangle had a declared range of {1, 20,000}
 Boolean variables have extreme values True and False but there is
no clear choice for the remaining three values
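The 4n + 1 count above comes from holding every variable at its nominal value while one variable at a time takes min, min+1, max-1 and max, plus the single all-nominal case. A sketch, using day/month ranges as an assumed example:

```python
def bva_test_cases(ranges):
    """ranges: list of (min, max) pairs, one per variable. Returns the
    4n + 1 standard BVA test cases: each variable in turn takes
    min, min+1, max-1, max while the others stay at their nominal value."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]                 # the single all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = v
            cases.append(tuple(case))
    return cases

# Two variables, e.g. day in 1..31 and month in 1..12:
cases = bva_test_cases([(1, 31), (1, 12)])
print(len(cases))  # 4*2 + 1 = 9
```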

Advantages of Boundary Value Analysis


1. Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
2. Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
4. For strongly typed languages robust testing results in run-time errors that abort
normal execution

10.2.4 Equivalence Partitioning

Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.

EP can be defined according to the following guidelines:


1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes
are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class
is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
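Guideline 1 can be sketched as code: a ranged input splits into one valid and two invalid classes, and one representative value per class is enough. The 1..100 quantity field is an assumed example:

```python
def equivalence_class(value, lo, hi):
    """For an input range [lo, hi], guideline 1 gives one valid and two
    invalid equivalence classes: below the range, inside it, above it."""
    if value < lo:
        return "invalid-below"
    if value > hi:
        return "invalid-above"
    return "valid"

# One representative test per class, e.g. for a quantity field accepting 1..100:
for v in (0, 50, 101):
    print(v, equivalence_class(v, 1, 100))
```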

10.2.5 Comparison Testing


There are situations where independent versions of software are developed for critical
applications, even when only a single version will be used in the delivered computer-based
system. These independent versions form the basis of a black box testing technique called
comparison testing or back-to-back testing.
1.2 Black box testing Methods

Graph-based Testing Methods


 Black-box methods based on the nature of the relationships (links) among the program
objects (nodes), test cases are designed to traverse the entire graph
 Transaction flow testing (nodes represent steps in some transaction and links represent
logical connections between steps that need to be validated)
 Finite state modeling (nodes represent user observable states of the software and links
represent transitions between states)
 Data flow modeling (nodes are data objects and links are transformations from one
data object to another)
 Timing modeling (nodes are program objects and links are sequential connections
between these objects, link weights are required execution times)
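Finite state modeling from the list above can be sketched as a small transition table that the tests must traverse until every link is exercised; the login-screen states and events are assumed for illustration:

```python
# Hypothetical state graph: nodes are user-observable states,
# links are the transitions the tests must traverse at least once.
transitions = {
    ("logged_out", "login_ok"): "inbox",
    ("logged_out", "login_bad"): "error",
    ("error", "retry"): "logged_out",
    ("inbox", "logout"): "logged_out",
}

def next_state(state, event):
    return transitions[(state, event)]

# One test sequence that covers every link in the graph:
covered = set()
state = "logged_out"
for event in ("login_bad", "retry", "login_ok", "logout"):
    covered.add((state, event))
    state = next_state(state, event)

print(covered == set(transitions))  # every transition exercised -> True
```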

Equivalence Partitioning
 Black-box technique that divides the input domain into classes of data from which test
cases can be derived
 An ideal test case uncovers a class of errors that might require many arbitrary test
cases to be executed before a general error is observed
 Equivalence class guidelines:

1. If input condition specifies a range, one valid and two invalid equivalence
classes are defined
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class is defined
4. If an input condition is Boolean, one valid and one invalid equivalence class is
defined

 Boundary Value Analysis


 Black-box technique that focuses on the boundaries of the input domain rather than its
center

 BVA guidelines:

1. If an input condition specifies a range bounded by values a and b, test cases
should include a and b, and values just above and just below a and b
2. If an input condition specifies a number of values, test cases should
exercise the minimum and maximum numbers, as well as values just above and
just below the minimum and maximum values
3. Apply guidelines 1 and 2 to output conditions; test cases should be designed to
produce the minimum and maximum output reports
4. If internal program data structures have boundaries (e.g. size limitations), be
certain to test the boundaries

 Comparison Testing
 Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications
 Often equivalence class partitioning is used to develop a common set of test cases for
each implementation
 Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably small set of test cases that
provide maximum test coverage
 Focus is on categories of faulty logic likely to be present in the software component
(without examining the code)
 Priorities for assessing tests using an orthogonal array

1. Detect and isolate all single mode faults


2. Detect all double mode faults
3. Multimode faults

Specialized Testing
 Graphical user interfaces
 Client/server architectures
 Documentation and help facilities
 Real-time systems

1. Task testing (test each time dependent task independently)


2. Behavioral testing (simulate system response to external events)
3. Intertask testing (check communications errors among tasks)
4. System testing (check interaction of integrated system software and hardware)

Advantages of Black Box Testing


 More effective on larger units of code than glass box testing
 Tester needs no knowledge of implementation, including specific programming
languages
 Tester and programmer are independent of each other
 Tests are done from a user's point of view
 Will help to expose any ambiguities or inconsistencies in the specifications
 Test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing


 Only a small number of possible inputs can actually be tested, to test every possible
input stream would take nearly forever
 Without clear and concise specifications, test cases are hard to design
 There may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried
 May leave many program paths untested
 Cannot be directed toward specific segments of code which may be very complex (and
therefore more error prone)

1.3 Testing process and the Software Testing Life Cycle

Every testing project has to follow the waterfall model of the testing process.
The waterfall model is as given below
1. Test Strategy & Planning

2. Test Design

3. Test Environment setup


4. Test Execution

5. Defect Analysis & Tracking

6. Final Reporting

According to the respective projects, the scope of testing can be tailored, but the process
mentioned above is common to any testing activity.
Regression Testing

1.4 What is regression Testing


 Regression testing is the process of testing changes to computer programs to make sure
that the older programming still works with the new changes.
 Regression testing is a normal part of the program development process. Test
department coders develop test scenarios and exercises that will test new units of
code after they have been written.

1.5 Test Execution


Test execution is the heart of the testing process. Each time your application changes,
you will want to execute the relevant parts of your test plan in order to locate defects and
assess quality.

Compatibility Testing: This test is also known as portability testing. During this test, the
test engineer validates that the application continues to execute on the customer's expected
platforms (OS, compilers, browsers, etc.).

During compatibility testing, two types of problems arise:


1. Forward compatibility
2. Backward compatibility

Forward compatibility:
The application that has been developed is ready to run, but the project technology or
environment (such as the OS) does not support running it.

Backward compatibility:
The application is not ready to run on the existing technology or environment.

Installation at the customer site involves the build plus the software components required to
run the application in the customer's environment. The following conditions are tested during
the installation process:

 Setup Program: Whether setup starts or not.
 Easy Interface: Whether the installation provides an easy interface or not.
 Occupied Disk Space: How much disk space is occupied after installation.

Sanitation Testing: This test is also known as garbage testing. During this test, the test
engineer finds extra features in the application build with respect to the S/w RS.
Most testers may not encounter this type of problem.

Parallel or Comparative Testing: During this test, the test engineer compares the application
build with similar applications or with older versions of the same application to assess
competitiveness.

This comparative testing can be done in two views:


 Similar type of applications in the market.
 Upgraded version of application with older versions.

Performance Testing: An advanced testing technique that is expensive to apply. During this
test, the testing team concentrates on the speed of processing.

Performance testing is classified into the subtests below.

1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing

Load Testing:
This test is also known as scalability testing. During this test, the test engineer
executes the application under the customer's expected configuration and load to estimate
performance.

Load: the number of users trying to access the system at a time.

This test can be done in two ways:

1. Manual testing. 2. By using a tool such as LoadRunner.
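A minimal manual load-test sketch using Python threads; `do_request` is a placeholder for one user's real operation against the system under test, and the 1-second acceptable response time is an assumed threshold:

```python
import threading
import time

def do_request(results, i):
    # Placeholder for one user's operation against the system under test.
    start = time.perf_counter()
    time.sleep(0.01)  # simulate the request round-trip
    results[i] = time.perf_counter() - start

def load_test(users=50):
    """Run `users` concurrent requests and report the worst response time."""
    results = [0.0] * users
    threads = [threading.Thread(target=do_request, args=(results, i))
               for i in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(results)

worst = load_test(50)
print(worst < 1.0)  # acceptable-response-time check, e.g. under 1 second
```

Dedicated tools such as LoadRunner do the same thing at much larger scale, with realistic user scripts and detailed reporting.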

Stress Testing:
During this test, the test engineer executes the application build under the customer's
expected configuration and peak load to estimate performance.

Data Volume Testing:
A tester conducts this test to find the maximum size of data that the application build can
allow or maintain.

Storage Testing:
Executing the application under huge amounts of resources to estimate the storage limitations
the application can handle is called storage testing.

Security Testing: Also an advanced testing technique, and complex to apply. Conducting these
tests requires highly skilled people with security domain knowledge.

This test is divided into three sub tests.

Authorization: Verifies the user's identity to check whether he is an authorized user or not.

Access Control: Also called privileges testing; the rights given to a user to perform a system task.

Encryption / Decryption:
Encryption: converting actual data into a secret code that is not understandable to others.
Decryption: converting the secret data back into the actual data.

[Fig: Client (Source -> Encryption) -> Server (Decryption -> Destination), and the reverse
path from server back to client]
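The round trip above can be sketched with a toy symmetric scheme. This is for illustration only, not real cryptography; real systems use vetted algorithms such as AES:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric scheme for illustration only -- NOT real cryptography.
    XOR with the key encrypts; applying the same operation again decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y"
secret = xor_crypt(b"card number 4111", key)   # encryption at the source
print(secret != b"card number 4111")           # unreadable in transit: True
print(xor_crypt(secret, key))                  # decryption at the destination
```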

User Acceptance Testing: After completing execution of all possible system tests, the
organization concentrates on user acceptance testing to collect feedback. To conduct user
acceptance tests, they follow two approaches: alpha testing and beta testing.

Note: Software development projects are of two types based on the product: a software
application (also called a project) and a product.

Acceptance tests
Intent:
 Will the client pay the developers?
 Test against specifications
 Often part of a specifications document

See comments on mid semester exam 2003 question 2 specifications

See Brad's Requirements Specifications v1.4

Tests should

 be user-centered
 be based on specifications
 be binary Pass/Fail
 test usual operation
 test unusual (exceptional) operation
 detect if the product fails to do what it should
 detect if the product does what it should not

Manual vs. Automation: When a tester conducts a test on an application without using any
third-party testing tool, the process is called manual testing. When a tester conducts a test
with the help of a software testing tool, the process is called automation.

Fig: Automation Life Cycle

Need for Automation:


When tools are not available, testing is done manually. If your company already has testing
tools, it may follow automation.

To verify the need for automation, consider the following two factors:

Rational Robot is a functional testing tool

Functional testing is not concerned with how quickly or slowly the application does
its job. That would be performance testing.
Functional testing is not directly concerned with how robust the application is.
Functional testing doesn't really care if there are memory leaks unless, of course,
the lack of a robust implementation means the user cannot use the application to do
what it needs to do.
Functional testing is concerned with verifying an application operates as it was
intended. There are various ways to gauge how well an application functions such
as how well it conforms to specifications or requirements. In the simplest terms, you
can ask "Does the application do what the user needs it to do?"
Impact of the test: indicates test repetition.
Criticality: indicates that a test is too complex to apply manually, e.g. load testing for
1000 users.

Retesting: Re-execution of the same test on the application with multiple test data is called
retesting.

Regression Testing: Re-execution of tests on a modified build, to ensure the bug fixes work
and to check for side effects, is called regression testing.

Any dependent modules may also cause side effects.
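A minimal sketch of the idea, using a hypothetical discount() function whose fix capped discounts at 50%: the previously failed test is re-run together with the impacted passed tests to catch side effects.

```python
# Regression sketch: after a bug fix, re-run not only the failed test
# but the impacted passed tests too, to detect side effects of the fix.

def discount(price, percent):
    percent = min(percent, 50)            # the bug fix under verification
    return price - price * percent / 100

regression_suite = {
    "fix works":     lambda: discount(100, 80) == 50,    # the previously failed test
    "normal case":   lambda: discount(100, 10) == 90,    # impacted passed tests
    "zero discount": lambda: discount(100, 0) == 100,
}

results = {name: check() for name, check in regression_suite.items()}
assert all(results.values()), results
```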

Fig: Regression Testing – on a build, 10 tests pass and the 11th fails; after development fixes the failure, the failed test and the impacted passed tests are re-executed on the modified build.
Fig: Testing Life Cycle

Company Level:
 Testing Policy – C.E.O
 Test Strategy – Test Manager / QA / PM
 Test Methodology – Test Manager / QA / PM

Project Level:
 Test Plan – Test Lead
 Test Cases, Test Procedure, Test Script – Test Lead, Test Engineer
 Test Log, Defect Report – Test Lead, Test Engineer
 Test Summary Report – Test Lead

 Error: A human mistake that can happen either intentionally or unintentionally


 Fault: A bug that appears in a given program
 Failure: An execution in which a bug is triggered and the program produces an output that differs from the specified output.
 One error can result in multiple bugs.
 Multiple errors can result in one bug.
 One bug can have one or more failures.
 Multiple bugs can lead to one or multiple failures.
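A tiny illustration (the code is hypothetical): a human error – typing "-" instead of "+" – leaves a fault in the program, which only surfaces as a failure for certain inputs:

```python
# Error -> fault -> failure, illustrated.
def buggy_sum(a, b):
    return a - b          # fault: "-" typed instead of "+" (the human error)

# The fault is masked for inputs where a - b happens to equal a + b:
assert buggy_sum(5, 0) == 5     # no failure observed for this input
# ...and becomes a failure whenever the output deviates from the spec (8):
assert buggy_sum(5, 3) != 8     # failure: specified output is 8, actual is 2
```

This is why one fault can produce zero, one, or many failures, depending on which inputs the tests exercise.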

1.6 Change Request

Initiating a Change Request


A user or developer may suggest a modification that would improve an existing application, notice a problem with an application, or recommend an enhancement. Any major or minor request is considered a problem with an application and will be entered as a change request.

Type of Change Request


 Bug – the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field)
 Change – a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find)
 Enhancement – new functionality or an item added to the application (for example, a new report, a new field, or a new button)

Priority for the request:


 Critical Problem – testing can continue, but we cannot go into production (live) with this problem
 Major Problem – testing can continue, but this feature will cause severe disruption to business processes in live operation
 Medium Problem – testing can continue and the system is likely to go live with only minimal departure from agreed business processes
 Minor Problem – both testing and live operations may progress; this problem should be corrected, but little or no change to business processes is envisaged
 Cosmetic Problem – e.g. colors, fonts, pitch size; however, if such features are key to the business requirements they will warrant a higher severity level
The users of the system, in consultation with the executive sponsor of the project, must then
agree upon the responsibilities and required actions for each category of problem.

1.7 Bug Tracking


 Locating and repairing software bugs is an essential part of software development.
 Bugs can be detected and reported by engineers, testers, and end-users in all phases of
the testing process.
 Information about bugs must be detailed and organized in order to schedule bug fixes
and determine software release dates.
Bug Tracking involves two main stages: reporting and tracking.

Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects)
that you detected. The bugs are stored in a database so that you can manage them and analyze
the status of your application.
When you report a bug, you record all the information necessary to reproduce and fix it. You
also make sure that the QA and development personnel involved in fixing the bug are notified.

Track and Analyze Bugs


The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed.
 First you report new bugs to the database, and provide all necessary information to
reproduce, fix, and follow up the bug.
 The Quality Assurance manager or Project manager periodically reviews all new
bugs and decides which should be fixed. These bugs are given the status Open and
are assigned to a member of the development team.
 Software developers fix the Open bugs and assign them the status Fixed.
 QA personnel test a new build of the application. If a bug does not reoccur, it is
closed. If a bug is detected again, it is reopened.
Communication is an essential part of bug tracking; all members of the development and quality assurance teams must be kept well informed to ensure that bug information is up to date and that the most important problems are addressed.
The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
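As a sketch, the status counts used as a quality indicator can be derived from the bug database with a simple tally (the records below are made up):

```python
# Tallying bugs by status as a simple quality indicator.
from collections import Counter

bugs = [                                 # hypothetical bug records
    {"id": 1, "status": "Open"},
    {"id": 2, "status": "Fixed"},
    {"id": 3, "status": "Closed"},
    {"id": 4, "status": "Open"},
]

counts = Counter(bug["status"] for bug in bugs)
print(counts["Open"], "open /", counts["Fixed"], "fixed /", counts["Closed"], "closed")
```

Trending these counts per build is what the reports and graphs mentioned above typically visualize.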
Fig: Defect Life Cycle
1.8 Defect Tracking

After a defect has been found, it must be reported to development so that it can be fixed.

 The Initial State of a defect will be ‘New’.

 The Project Lead of the development team will review the defect and set it to one of
the following statuses:
Open – Accepts the bug and assigns it to a developer.
Invalid Bug – The reported bug is not a valid one as per the requirements/design.
As Designed – This is intended functionality as per the requirements/design.
Deferred – This will be treated as an enhancement.
Duplicate – The bug has already been reported.
Document – If the defect is set to any of the above statuses apart from Open and the testing team does not agree with the development team, it is set to Document status.

 Once the development team has started working on the defect, the status is set to WIP (Work in Progress); if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting.

 After the development team has fixed the defect, the status is set to FIXED, which
means the defect is ready to re-test.

 On re-testing, if the defect still exists, the status is set to REOPENED, and it follows the same cycle as an open defect.

 If the fixed defect satisfies the requirements/passes the test case, it is set to Closed.
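The statuses above can be sketched as a small transition table; the allowed moves are inferred from the text, and a real defect tracker may differ:

```python
# Defect life cycle as a status transition table (illustrative only).
TRANSITIONS = {
    "New":         {"Open", "Invalid Bug", "As Designed", "Deferred",
                    "Duplicate", "Document"},
    "Open":        {"WIP", "Dev Waiting"},
    "WIP":         {"Fixed", "Dev Waiting"},
    "Dev Waiting": {"WIP"},
    "Fixed":       {"Closed", "Reopened"},     # re-test outcome
    "Reopened":    {"WIP", "Dev Waiting"},     # same cycle as an open defect
}

def move(defect, new_status):
    """Apply a status change, rejecting moves the life cycle does not allow."""
    if new_status not in TRANSITIONS.get(defect["status"], set()):
        raise ValueError(f"illegal transition {defect['status']} -> {new_status}")
    defect["status"] = new_status
    return defect

d = {"id": 101, "status": "New"}              # hypothetical defect record
for status in ("Open", "WIP", "Fixed", "Closed"):
    move(d, status)
```

Encoding the transitions explicitly makes illegal status changes (e.g. New straight to Closed) fail loudly instead of silently corrupting the defect report.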

1.9 Traceability Matrix

A traceability matrix is created by associating requirements with the products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. Below is a simple traceability matrix structure; more items can be included than those shown. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
SAMPLE TRACEABILITY MATRIX
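As an illustration (identifiers are hypothetical; real ones come from the CM plan), a traceability matrix can be held as a simple mapping from requirement IDs to the test cases that cover them, which also makes uncovered requirements easy to spot:

```python
# Traceability matrix sketch: requirement id -> covering test case ids.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                      # requirement with no test coverage
}

uncovered = [req for req, tests in traceability.items() if not tests]
print("Uncovered requirements:", uncovered)
```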

1.10 Need for Test Strategy

The objective of testing is to reduce the risks inherent in computer systems. The strategy must address those risks and present a process that can reduce them. The system's risk concerns then establish the objectives for the test process. The two components of the testing strategy are the Test Factors and the Test Phases.

Fig: Origin of defects – analysis and design errors account for 64%, coding errors for 36%.

 Test Factor – The risk or issue that needs to be addressed as part of the test strategy.
The strategy will select those factors that need to be addressed in the testing of a
specific application system.
 Test Phase – The Phase of the systems development life cycle in which testing will
occur.

Test Reporting

A final test report should be prepared at the conclusion of each test activity. This includes the
following
 Individual Project Test Report
 Integration Test Report
 System Test Report
 Acceptance Test Report

Contents of a Test Report


The contents of a test report are as follows:

Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations

Test Case Definition

 A test case should contain the following attributes


- Test case Identity
- Title
- Pre-conditions
- Test setup
- Input parameters
- Procedure
- Expected output
- Special observations
- Mapping to requirements (optional)

A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case will include the following key components:

Test Case Name(s) - Each test case must have a unique name, so that the results of these test
elements can be traced and analyzed.
Test Case Prerequisites - Identify set up or testing criteria that must be established before a
test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders and dependencies that might
exist between test cases.
Test Procedures – Identify the application steps necessary to complete the test case.
Input Values - This section of the test case identifies the values to be supplied to the
application as input including, if necessary, the action to be completed.
Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources - Take note of the sources for extracting test data if it is not included in the
test case.
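The key components above can be sketched as a structured record (field names mirror the text; all values are hypothetical):

```python
# A test case captured as a structured record.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str                                   # unique, for tracing results
    prerequisites: list = field(default_factory=list)
    execution_order: int = 0                    # run order / dependency hint
    procedure: list = field(default_factory=list)
    input_values: dict = field(default_factory=dict)
    expected_results: dict = field(default_factory=dict)
    data_sources: list = field(default_factory=list)

tc = TestCase(
    name="TC-LOGIN-001",
    prerequisites=["user account exists"],
    procedure=["open login page", "enter credentials", "submit"],
    input_values={"username": "admin", "password": "secret"},
    expected_results={"landing_page": "dashboard"},
)
```

Keeping test cases in a uniform structure like this is what allows names, run orders, and results to be traced and analyzed across a suite.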

1.10.1 Test Case Classification


Complexity Type   Complexity of       Interfaces with      Number of            Baseline
                  Test Case           other Test Cases     Verification Points  Test Data
Simple            < 2 transactions    0                    < 2                  Not Required
Average           3-6 transactions    < 3                  3-8                  Required
Complex           > 6 transactions    > 3                  > 8                  Required

A sample guideline for classification of test cases is given below.

 Any verification point containing a calculation is considered 'Complex'
 Any verification point which interfaces with or interacts with another application is classified as 'Complex'
 Any verification point consisting of report verification is considered 'Complex'
 A verification point comprising Search functionality may be classified as 'Complex' or 'Average' depending on the complexity
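A sketch of how the classification table might be applied in code; the table leaves small gaps between the bands (e.g. exactly 2 transactions), which this illustration resolves by checking for 'Complex' first, then 'Average':

```python
# Classify a test case per the complexity table above (illustrative thresholds).
def classify(transactions, interfaces, verification_points):
    if transactions > 6 or interfaces > 3 or verification_points > 8:
        return "Complex"
    if transactions >= 3 or interfaces >= 1 or verification_points >= 3:
        return "Average"
    return "Simple"                    # 0 interfaces, few transactions/points
```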

Test Case Complexity Adjustment

Type      Weight   Adjust Factor   Number                                            Result
Simple    1        2 (A)           No. of Simple requirements in the project (R1)    Number * Adjust factor A
Average   2        4 (B)           No. of Average requirements in the project (R2)   Number * Adjust factor B
Complex   3        8 (C)           No. of Complex requirements in the project (R3)   Number * Adjust factor C

Total Test Case Points = R1 + R2 + R3

Responsibilities of a tester

 Participate in the reviews of Requirement documents, Design documents, Test Strategy and Test Plan documents, and validate them
 Participate in code walkthroughs if involved in integration testing and validate them
 Contribute towards maintaining requirement traceability (traceability of requirements against SRS/FFD, HLD/LLD, Test Plan and Test Results)
 Try to achieve 100% test coverage
 Own the responsibility of achieving the highest level of quality
 Be responsible for proper test planning and de-risking
 Coordinate closely with the development team
 Escalate issues early and judiciously
