Basic-Software-Testing - 28 03 2015
General definition:
Testing = Verification + Validation
Verification: whether the system is being built right or wrong - "Are we building the product right?"
Testing confirms that the product:
- has been developed according to specifications
- is working correctly
- satisfies customer requirements
How to test?
The main purpose of preparing a Test Plan is to ensure that everyone concerned with the project
is in sync with regard to the scope, responsibilities, deadlines and deliverables for the
project.
A Test Plan is a useful way to think through the efforts needed to validate the
acceptability of a software product.
The completed document will help people outside the test group understand the 'why'
and 'how' of product validation.
It should be thorough enough to be useful but not so thorough that no one outside the
test group will read it.
[Diagram: SDLC - STLC (V model). Development phases map to test levels: Requirement Study - Production Verification Testing and User Acceptance Testing; High Level Design - System Testing; Low Level Design - Integration Testing; with Unit Testing at the lowest level.]
Let us see brief definitions of the widely employed types of testing.
Unit Testing: Testing done on a unit, the smallest piece of software, to verify whether it
satisfies its functional specification or its intended design structure.
Integration Testing: Testing which takes place as sub elements are combined (i.e.,
integrated) to form higher-level elements
Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have
not caused unintended effects and that the system still complies with its specified requirements.
System Testing: Testing the software for the required specifications on the intended hardware
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies
its acceptance criteria, which enables a customer to determine whether to accept the system
or not.
Performance Testing: To evaluate the time taken or response time of the system to perform
its required functions, in comparison with the specified performance requirements.
Stress Testing: To evaluate a system beyond the limits of the specified requirements or
system resources (such as disk space, memory, processor utilization) to ensure the system does
not break unexpectedly.
Load Testing: Load Testing, a subset of stress testing, verifies that a web site can handle a
particular number of concurrent users while maintaining acceptable response times
Alpha Testing: Testing of a software product or system conducted at the developer’s site by
the customer
Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered
software product or system.
Monkey / Chimpanzee Testing: Covering only the main activities of the application during
testing is called monkey testing.
Exploratory Testing: Covering the application's activities level by level during testing is
called exploratory testing.
Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks whether
the build delivered by the development team is stable enough for complete testing.
Configuration Testing: This test is also known as Hardware Compatibility testing. During this
test, the test engineer validates whether the application build supports different hardware
devices.
Inter Systems Testing: This test is also known as End-to-End testing. During this test, the test
engineer validates whether the application build can coexist with other existing software at the
customer site and share resources (hardware or software).
Installation Testing: Testing the application's installation process in the customer-specified
environment and conditions.
Smoke Testing: An extra shakeup of sanity testing is called Smoke Testing; the testing team
rejects a build back to the development team, with reasons, before starting testing.
Bebugging: The development team releases a build with known bugs for the testing team to find.
Big Bang Testing: A single stage of testing after all modules have been developed is called
Big Bang testing. It is also known as informal testing.
Incremental Testing: A testing process carried out in multiple stages is called incremental
testing. This is also known as formal testing.
Prevention: Changing software so that it can be more easily corrected, adapted and enhanced.
WBT: A coding-level testing technique to verify the completeness and correctness of the programs.
Also called Glass Box Testing or Clear Box Testing.
Verification:
- Whether system is right or wrong?
- Are we building the product right?
- Looks at Process compliance.
- Preventive in nature
- IEEE/ANSI definition:
The process of evaluating a system or component to determine whether
the products of a given development phase satisfy the conditions
imposed at the start of that phase
Validation:
- Whether it is the right system or not?
- Are we building the right product?
- Looks at Product compliance.
- Corrective in nature but can be preventive also.
- IEEE/ANSI definition:
The process of evaluating a system or component during or at the end
of the development process to determine whether it satisfies specified
requirements
- Verification and Validation are complementary
Grey Box Testing:
http://www.softwaretestengineer.com/free2/software-qa-testing-test-tester-2210.html
Grey box testing
DEFINITION
Gray Box Testing is a software testing method which is a combination of Black Box
Testing method and White Box Testing method. In Black Box Testing, the internal structure of
the item being tested is unknown to the tester and in White Box Testing the internal structure
is known. In Gray Box Testing, the internal structure is partially known. This involves having
access to internal data structures and algorithms for purposes of designing the test cases, but
testing at the user, or black-box level.
Gray Box Testing is named so because the software program, in the eyes of the tester is like a
gray/semi-transparent box; inside which one can partially see.
EXAMPLE
An example of Gray Box Testing would be when the codes for two units/modules are studied
(White Box Testing method) for designing test cases and actual tests are conducted using the
exposed interfaces (Black Box Testing method).
LEVELS APPLICABLE TO
Though Gray Box Testing method may be used in other levels of testing, it is primarily useful in
Integration Testing.
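To make the idea concrete, here is a minimal Python sketch (not from the original text): the PriceCatalog class and its internal cache are hypothetical. The tester's knowledge that lookups are cached internally (white-box insight) shapes the test, but the component is exercised only through its public interface and an externally observable counter (black-box execution).

    # Hypothetical component: knowing (white-box) that lookups are cached
    # internally, the tester drives it only through its public interface
    # (black-box) and checks the externally visible effect of the cache.

    class PriceCatalog:
        def __init__(self):
            self._cache = {}          # internal detail known to the tester
            self.backend_calls = 0    # observable counter used by the test

        def price_of(self, item):
            if item not in self._cache:
                self.backend_calls += 1
                self._cache[item] = 10.0  # pretend backend lookup
            return self._cache[item]

    catalog = PriceCatalog()
    assert catalog.price_of("book") == 10.0
    assert catalog.price_of("book") == 10.0
    # Grey-box assertion: the second call must be served from the cache.
    assert catalog.backend_calls == 1
    print("grey-box cache test passed")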
Use cases are high-level requirements; they describe all functionality from the user's point of view.
Test cases are low-level requirements; they describe each step of the functionality.
Quality:
V model:
Build: When coding-level testing is over and the modules have been completely integration
tested, the result is called a build. A build is produced after integration testing (for example, an .exe).
Test Management: Testers maintain documents related to every project and refer to these
documents for future modifications.
[Diagram: test management activities across the life cycle - assessment of the development plan, test plan preparation, requirements phase testing, information gathering & analysis, port testing, testing of software changes during maintenance, and test efficiency evaluation.]
Change Request: The request made by the customer to modify the software.
Independent testers or the testing team are involved in BBT, UAT and the test management process.
GUI Testing
It is absolutely essential that any application has to be user-friendly. The end user should be
comfortable while using all the components on screen and the components should also perform
their functionality with utmost clarity. GUI Testing can refer to just ensuring that the look-and-
feel of the application is acceptable to the user, or it can refer to testing the functionality of
each and every component involved.
The following is a set of guidelines to ensure effective GUI Testing and can be used even as a
checklist while testing a product / application.
Unit Testing:
After the completion of design and design reviews, programmers concentrate on coding.
During this stage they conduct program-level testing with the help of WBT techniques. This
WBT is also known as glass box testing or clear box testing.
WBT is based on the code. Senior programmers conduct testing on programs; WBT is applied
at the module level.
1. Execution Testing
- Basis path coverage (correctness of every statement execution)
- Loop coverage (correct termination of loops)
- Program technique coverage (fewer memory cycles and CPU cycles during execution)
2. Operations Testing: whether the software runs on the customer's expected environment
platforms (system software such as OS, compilers, browsers, etc.)
Unit Tests
Intent:
Uncover errors in an individual unit, often a class
Provide a safety net for aggressive development
Tests should
be black box and white box
be based on unit specifications
be binary Pass/Fail
test usual operation
test unusual (exceptional) operation
detect if the unit fails to do what it should
detect if the unit does what it should not
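As a concrete illustration of these points, here is a minimal unit test sketch using Python's unittest framework. The divide function and its expected behaviour are hypothetical, not taken from the text; the tests show binary pass/fail checks for usual operation, exceptional operation, and for catching something the unit should not do.

    import unittest

    def divide(a, b):
        # Hypothetical unit under test: division with input validation.
        if b == 0:
            raise ValueError("divisor must not be zero")
        return a / b

    class DivideTests(unittest.TestCase):
        def test_usual_operation(self):
            # Usual operation: the unit does what it should.
            self.assertEqual(divide(10, 2), 5)

        def test_exceptional_operation(self):
            # Unusual (exceptional) operation: invalid input is rejected.
            with self.assertRaises(ValueError):
                divide(10, 0)

        def test_does_not_do_what_it_should_not(self):
            # The unit must not silently truncate: 7 / 2 is 3.5, not 3.
            self.assertEqual(divide(7, 2), 3.5)

    if __name__ == "__main__":
        unittest.main()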
Integration Testing:
After the completion of unit testing of the dependent modules, the development people
concentrate on integration testing. During this test, programmers verify the integration of
modules with respect to the HLDD (which contains the hierarchy of modules).
There are two types of approaches to conduct Integration Testing:
Top-down Approach
Bottom-up approach.
Stub: a called program; it sends control back to the main module in place of a real sub module.
Driver: a calling program; it invokes a sub module in place of the main module.
Top-Down: This approach starts testing from the higher-level modules; stubs are used in place
of the lower-level modules.
[Diagram: Top-down approach - the Main module is tested with a Stub standing in for Sub Modules 1 and 2.]
Bottom-Up: This approach starts testing from the lower-level modules. Drivers are used to
connect to the sub modules (for example, for a login module, create a driver that supplies a
default user id and password).
[Diagram: Bottom-up approach - a Driver invokes Sub Module 1 and Sub Module 2 in place of the Main module.]
Sandwich: This approach combines the Top-down and Bottom-up approaches of integration
testing. In it, the middle-level modules are tested using both drivers and stubs.
[Diagram: Sandwich approach - a Driver and a Stub surround the middle-level modules (Sub Modules 1-3) under test.]
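The stub and driver roles described above can be sketched in code. The following Python example is illustrative only; the OrderModule and PaymentServiceStub names are hypothetical. The stub stands in for a lower-level sub module during a top-down step, while the driver stands in for the main module and invokes the sub module directly, as in a bottom-up step.

    # Hypothetical modules for illustration only.

    class PaymentServiceStub:
        """Stub: stands in for the real (not yet integrated) payment sub module.
        It returns control to the caller with a canned answer."""
        def charge(self, amount):
            return True  # always succeeds; no real payment is made

    class OrderModule:
        """Higher-level module under test in a top-down integration step."""
        def __init__(self, payment_service):
            self.payment_service = payment_service

        def place_order(self, amount):
            return "CONFIRMED" if self.payment_service.charge(amount) else "REJECTED"

    def driver():
        """Driver: stands in for the main module in a bottom-up integration step.
        It invokes the sub module directly with test inputs."""
        order = OrderModule(PaymentServiceStub())
        assert order.place_order(100) == "CONFIRMED"
        print("integration step with stub and driver passed")

    if __name__ == "__main__":
        driver()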
System Testing: After the completion of coding and the corresponding tests (unit and
integration), the development team releases the fully integrated set of modules as a build.
After receiving a stable build from the development team, a separate testing team concentrates
on functional and system testing with the help of BBT.
Black box testing treats the system as a "black-box", so it doesn't explicitly use knowledge of
the internal structure or programming knowledge.
It focuses on the functionality of the module. Some people refer to black box testing as
behavioral, functional, opaque-box or closed-box testing. Behavioral test design is slightly
different from black-box test design because the use of internal knowledge isn't strictly
forbidden, but it's still discouraged.
[Diagram: input is supplied to the application, treated as a black box, and the output is observed.]
There are some bugs that cannot be found using only black box or only white box testing. If the
test cases are extensive and the test inputs are drawn from a large sample space, it is usually
possible to find the majority of the bugs through black box testing.
The basic functional or regression testing tools capture the results of black box tests in a script
format. Once captured, these scripts can be executed against future builds of an application to
verify that new functionality hasn't disabled previous functionality.
BVA focuses on the boundaries of the input space to identify test cases.
The rationale is that errors tend to occur near the extreme values of an input variable.
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
Equivalence Partitioning
Black-box technique that divides the input domain into classes of data from which test
cases can be derived
An ideal test case uncovers a class of errors that might require many arbitrary test
cases to be executed before a general error is observed
Equivalence class guidelines:
1. If input condition specifies a range, one valid and two invalid equivalence
classes are defined
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class is defined
4. If an input condition is Boolean, one valid and one invalid equivalence class is
defined
BVA guidelines: complement the equivalence classes above by exercising values at, just below and just above each boundary of the input ranges (illustrated in the sketch below).
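A minimal sketch of equivalence partitioning and boundary value analysis in Python, assuming a hypothetical requirement that an age field accepts whole numbers from 18 to 60; the function and values are illustrative, not from the text.

    # Hypothetical requirement: an "age" field accepts whole numbers 18..60.

    def is_valid_age(age):
        return isinstance(age, int) and 18 <= age <= 60

    # Equivalence partitioning: one representative value per class.
    # Valid class: 18..60; invalid classes: below 18 and above 60.
    ep_cases = [(30, True), (5, False), (75, False)]

    # Boundary value analysis: values at and just beyond each boundary.
    bva_cases = [(17, False), (18, True), (19, True),
                 (59, True), (60, True), (61, False)]

    for value, expected in ep_cases + bva_cases:
        assert is_valid_age(value) == expected, f"failed for {value}"
    print("all equivalence and boundary cases passed")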
Comparison Testing
Black-box testing for safety critical systems in which independently developed
implementations of redundant systems are tested for conformance to specifications
Often equivalence class partitioning is used to develop a common set of test cases for
each implementation
Orthogonal Array Testing
Black-box technique that enables the design of a reasonably small set of test cases that
provide maximum test coverage
Focus is on categories of faulty logic likely to be present in the software component
(without examining the code)
Priorities for assessing tests using an orthogonal array
Specialized Testing
Graphical user interfaces
Client/server architectures
Documentation and help facilities
Real-time systems
Every testing project has to follow the waterfall model of the testing process.
The waterfall model is as given below
1. Test Strategy & Planning
2. Test Design
6. Final Reporting
According to the respective projects, the scope of testing can be tailored, but the process
mentioned above is common to any testing activity.
Regression Testing
Compatibility Testing: This test is also known as portability testing. During this test, the test
engineer validates that the application continues to execute on the customer's expected
platforms (such as OS, compilers, browsers, etc.).
Forward compatibility:
The application build is ready to run, but the technology or environment (for example, the OS)
does not yet support it; the problem lies in the environment rather than the build.
Backward compatibility:
The environment is supported, but the application build is not ready to run on that technology
or environment; the problem lies in the build.
Installation Testing: the build, together with the required supporting software components, is
installed in the customer's environment using a setup program. The following conditions are
checked during the installation process:
1. The setup program works correctly
2. The installation provides an easy interface
3. Occupied disk space
Occupied Disk Space: how much disk space the application occupies after installation.
Sanitation Testing: This test is also known as Garbage Testing. During this test, the test
engineer looks for extra features in the application build that are not in the software
requirements specification (S/w RS).
Most testers may not encounter this type of problem.
Parallel or Comparative Testing: During this test, the test engineer compares the application
build with similar applications or older versions of the same application to assess competitiveness.
Performance Testing: It is an advanced testing technique and expensive to apply. During this
test, the testing team concentrates on the speed of processing. It includes:
1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing
Load Testing:
This test is also known as scalability testing. During this test, test engineer
executes our application under customer expected configuration and load to estimate
performance.
Stress Testing:
During this test, test engineer executes our application build under customer
expected configuration and peak load to estimate performance.
Storage Testing:
Execution of our application under huge amounts of resources to estimate
storage limitations to be handled by our application is called as Storage Testing.
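A rough sketch in Python of how a load/performance measurement might be scripted. The handle_request function, the number of concurrent users and the acceptance threshold are all assumptions for illustration; a real load test would drive the deployed application (for example, over HTTP) rather than a local function.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():
        # Hypothetical operation under test; stands in for a real request.
        time.sleep(0.01)

    def timed_request(_):
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    def run_load_test(concurrent_users=50, requests_per_user=10):
        # Simulate the customer-expected load with concurrent simulated users.
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            timings = list(pool.map(timed_request,
                                    range(concurrent_users * requests_per_user)))
        avg = sum(timings) / len(timings)
        worst = max(timings)
        print(f"average response {avg:.3f}s, worst {worst:.3f}s")
        # Acceptance threshold is an assumption made for the example.
        assert worst < 1.0, "response time exceeded acceptable limit"

    if __name__ == "__main__":
        run_load_test()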
Security Testing: It is also an advanced testing technique and complex to apply; conducting
these tests requires highly skilled people with security domain knowledge.
Access Control: Also called privilege testing; it verifies the rights given to a user to perform a system task.
Encryption / Decryption:
Encryption- To convert actual data into a secret code which may not be understandable to
others.
Decryption- Converting the secret data into actual data.
[Diagram: data is encrypted at the client and decrypted at the server.]
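As an illustration of encryption and decryption (not a prescribed implementation), the sketch below uses Python with the third-party cryptography package; the sample data and the shared key handling are assumptions made for the example.

    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()   # shared secret assumed to exist on client and server
    cipher = Fernet(key)

    # Client side: encrypt the actual data into a secret code (token).
    token = cipher.encrypt(b"account=1234;amount=500")

    # Server side: decrypt the token back into the actual data.
    plain = cipher.decrypt(token)
    assert plain == b"account=1234;amount=500"

    # A security test would also verify that a token decrypted with the wrong
    # key is rejected rather than silently accepted.
    try:
        Fernet(Fernet.generate_key()).decrypt(token)
    except InvalidToken:
        print("wrong key correctly rejected")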
User Acceptance Testing: After the completion of all possible system test execution, the
organization concentrates on user acceptance testing to collect feedback. To conduct user
acceptance tests, two approaches are followed: Alpha testing and Beta testing.
Note: Software development work is of two types based on the deliverable: a software
application (also called a Project) and a Product.
Acceptance tests
Intent:
Will the client pay the developers?
Test against specifications
Often part of a specifications document
Tests should
be user-centered
be based on specifications
be binary Pass/Fail
test usual operation
test unusual (exceptional) operation
detect if the product fails to do what it should
detect if the product does what it should not
Manual Vs Automation: When a tester conducts a test on the application without using any third-party
testing tool, the process is called Manual Testing. When a tester conducts a test with the help of
a software testing tool, the process is called Automation.
To verify the need for automation, two factors are considered: the impact of the test and its criticality (described below).
Functional testing is not concerned with how quickly or slowly the application does
its job. That would be performance testing.
Functional testing is not directly concerned with how robust the application is.
Functional testing doesn't really care if there are memory leaks unless, of course,
the lack of a robust implementation means the user cannot use the application to do
what it needs to do.
Functional testing is concerned with verifying an application operates as it was
intended. There are various ways to gauge how well an application functions such
as how well it conforms to specifications or requirements. In the simplest terms, you
can ask "Does the application do what the user needs it to do?"
Impact of the test: indicates test repetition (how often the test has to be repeated).
Criticality: indicates how complex the test is to apply manually (for example, load testing for
1000 users).
Retesting: Re-execution of the same test on the application with multiple test data is called
Retesting.
Regression Testing: Re-execution of tests on a modified build, to ensure that the bug fixes work
and that no unintended side effects have occurred, is called regression testing.
[Diagram: regression cycle - 10 tests passed; the failing test (test 11) is returned to development.]
Test documentation hierarchy:
Company level:
- Testing Policy (prepared by the C.E.O.)
- Test Strategy (Test Manager / QA / PM)
Project level (Test Lead, Test Engineer):
- Test Methodology
- Test Cases
- Test Procedure
- Test Script
- Test Log
- Defect Report
- Test Summary Report (Test Lead)
Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects)
that you detected. The bugs are stored in a database so that you can manage them and analyze
the status of your application.
When you report a bug, you record all the information necessary to reproduce and fix it. You
also make sure that the QA and development personnel involved in fixing the bug are notified.
After a defect has been found, it must be reported to development so that it can be fixed.
The Project Lead of the development team will review the defect and set it to one of
the following statuses:
Open – Accepts the bug and assigns it to a developer.
Invalid Bug – The reported bug is not a valid one as per the requirements/design
As Designed – This is an intended functionality as per the requirements/design
Deferred –This will be an enhancement.
Duplicate – The bug has already been reported.
Document – When a bug is set to any of the above statuses apart from Open and the
testing team does not agree with the development team, it is set to Document status.
Once the development team has started working on the defect, the status is set to WIP
(Work in Progress); if the development team is waiting for a go-ahead or some
technical feedback, they set it to Dev Waiting.
After the development team has fixed the defect, the status is set to FIXED, which
means the defect is ready to re-test.
If, on re-testing, the defect still exists, the status is set to REOPENED,
which then follows the same cycle as an open defect.
If the fixed defect satisfies the requirements/passes the test case, it is set to Closed.
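The status flow described above can be summarised as a small state machine. The sketch below is a simplified Python model; the initial REPORTED status and the exact transition rules are assumptions based on the text, not an official workflow definition.

    # Simplified model of the defect statuses and transitions described above.
    ALLOWED_TRANSITIONS = {
        "REPORTED":    {"OPEN", "INVALID BUG", "AS DESIGNED", "DEFERRED", "DUPLICATE"},
        "OPEN":        {"WIP", "DEV WAITING"},
        "WIP":         {"FIXED", "DEV WAITING"},
        "DEV WAITING": {"WIP"},
        "FIXED":       {"CLOSED", "REOPENED"},   # decided on re-test
        "REOPENED":    {"WIP", "DEV WAITING"},   # same cycle as an open defect
        "INVALID BUG": {"DOCUMENT"},
        "AS DESIGNED": {"DOCUMENT"},
        "DEFERRED":    {"DOCUMENT"},
        "DUPLICATE":   {"DOCUMENT"},
    }

    def move(current, new):
        # Reject transitions the workflow above does not allow.
        if new not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"cannot move defect from {current} to {new}")
        return new

    # Example lifecycle: reported, accepted, worked on, fixed, re-tested and closed.
    status = "REPORTED"
    for step in ("OPEN", "WIP", "FIXED", "CLOSED"):
        status = move(status, step)
    print("final status:", status)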
A traceability matrix is created by associating requirements with the products that satisfy
them. Tests are associated with the requirements on which they are based and the product
tested to meet the requirement. Below is a simple traceability matrix structure. There can be
more things included in a traceability matrix than shown below. Traceability requires unique
identifiers for each requirement and product. Numbers for products are established in a
configuration management (CM) plan.
SAMPLE TRACEABILITY MATRIX
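The sample matrix itself is not reproduced here; as an illustration of the structure, the following Python sketch models a traceability matrix as a mapping from requirement identifiers to the product and test cases that cover them. All identifiers are hypothetical.

    # Minimal sketch of a traceability matrix; identifiers are hypothetical.
    traceability = {
        "REQ-001": {"product": "PRD-1.2", "tests": ["TC-101", "TC-102"]},
        "REQ-002": {"product": "PRD-1.3", "tests": ["TC-103"]},
        "REQ-003": {"product": "PRD-2.0", "tests": []},  # not yet covered
    }

    # Coverage check: list requirements that no test case traces back to.
    uncovered = [req for req, row in traceability.items() if not row["tests"]]
    print("requirements without tests:", uncovered)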
The objective of testing is to reduce the risks inherent in computer systems. The strategy must
address the risks and present a process that can reduce those risks. The system concerns, or
risks, then establish the objectives for the test process. The two components of the testing
strategy are the Test Factor and the Test Phase.
[Figure: origin of defects - analysis and design errors account for about 64%, coding errors for about 36%.]
Test Factor – The risk or issue that needs to be addressed as part of the test strategy.
The strategy will select those factors that need to be addressed in the testing of a
specific application system.
Test Phase – The Phase of the systems development life cycle in which testing will
occur.
Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes the
following
Individual Project Test Report
Integration Test Report
System Test Report
Acceptance test Report
Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations
A test case identifies the specific input values that will be sent to the application, the
procedures for applying those inputs, and the expected application values for the procedure
being tested. A proper test case will include the following key components:
Test Case Name(s) - Each test case must have a unique name, so that the results of these test
elements can be traced and analyzed.
Test Case Prerequisites - Identify set up or testing criteria that must be established before a
test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders and dependencies that might
exist between test cases.
Test Procedures – Identify the application steps necessary to complete the test case.
Input Values - This section of the test case identifies the values to be supplied to the
application as input including, if necessary, the action to be completed.
Expected Results - Document all screen identifier(s) and expected value(s) that must be
verified as part of the test. These expected results will be used to measure the acceptance
criteria, and therefore the ultimate success of the test.
Test Data Sources - Take note of the sources for extracting test data if it is not included in the
test case.
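To show how these components might look in practice, here is a minimal Python unittest sketch. The login behaviour, test data and identifiers (such as TC-LOGIN-001) are hypothetical; the comments map each part of the test back to the components listed above.

    import unittest

    class TestLoginValidUser(unittest.TestCase):
        """Test Case Name: TC-LOGIN-001 (unique, traceable name).
        Prerequisites: user account 'demo' exists and the application is reachable.
        Execution order / dependencies: none.
        Test data source: inline (no external data file needed)."""

        def setUp(self):
            # Test case prerequisites are established here.
            self.accounts = {"demo": "secret"}  # hypothetical seeded data

        def login(self, user, password):
            # Stand-in for the application step under test.
            return self.accounts.get(user) == password

        def test_valid_login(self):
            # Input values -> procedure -> expected result.
            self.assertTrue(self.login("demo", "secret"))

        def test_invalid_login(self):
            # Expected result: an incorrect password must be rejected.
            self.assertFalse(self.login("demo", "wrong"))

    if __name__ == "__main__":
        unittest.main()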
Responsibilities of a tester