Test Strategy - QA
Insignia Consultancy Solutions
Amendment History:
Summary of changes in test strategy document:
Reviewers:
This document must be reviewed by the following:
Approvals:
This document must be approved by the following:
Glossary of Terms:
List any terms used in this document.
Table of Contents
1. Introduction
   1.1 Objectives
   1.2 Scope
2. Types of Testing
   2.1 Functional Testing
   2.2 Non-Functional Testing
3. Testing Overview
   3.1 Test Life Cycle
   3.2 Requirement Analysis
   3.3 Test Planning
   3.4 Test Case Design
   3.5 Test Environment Setup
   3.6 Test Execution
   3.7 Test Closure
4. Testing Tools and Technology
5. Process of End-to-end Testing
   5.1 TDD (Test Driven Development)
   5.2 BDD (Behaviour Driven Development)
6. Defect Management
   6.2 Defect Reporting
9. Test Environment
10. Release Control
16. Summary
1. Introduction
A test strategy is a high-level plan consisting of principles that guide the overall software
testing process. It is an approach to verify and validate the project to the maximum extent.
An important part of the test strategy document is to explain how risks are mitigated, which
test cases are executed in the different test phases, and how responsibility for testing is
distributed.
1.1 Objectives
Test Strategy is a set of guidelines that explains test design and determines how testing
needs to be done. It provides a structured approach to the entire QA team, guiding them
toward achieving testing objectives in the most efficient way. It defines the testing approach,
scope, and objectives, clarifies roles and responsibilities, and addresses potential risks and
contingencies. By providing clear guidelines and standards for testing activities, the
document aims to facilitate effective communication, collaboration, and decision-making
among project stakeholders. Ultimately, it ensures that testing efforts are aligned with project
goals, requirements, and industry best practices, leading to the successful delivery of a high-
quality product.
1.2 Scope
The scope determines the testing tasks and responsibilities, along with their respective
timelines, and identifies the individuals responsible for approving, reviewing, and utilizing
the test strategy document. The scope outlined in the testing strategy document encompasses a
comprehensive range of testing activities to ensure a thorough evaluation of the product. Key
points include:
Functional Testing: A type of software testing that verifies the functionality of a software
system or application. It focuses on ensuring that the system behaves according to the
specified functional requirements and meets the intended business requirements. For example,
testing the core functionality of Flipkart means checking whether a user can place an order.
Local Configuration Testing: Validation of the application's behavior under different local
configurations to ensure consistent performance across environments, for example, VPN setup
and API execution.
By encompassing these testing types, the scope ensures a thorough evaluation of the
product's functionality, usability, performance, security, and compatibility, thereby enhancing
its overall quality and reliability.
2. Types of Testing
2.1 Functional Testing
1. Boundary Value Testing: A software testing technique in which tests are
designed to include representatives of boundary values. It is performed by the
QA testing teams.
2. Smoke Testing: A testing technique which examines all the basic components of a
software system to ensure that they work properly. Typically, smoke testing is
conducted by the testing team immediately after a software build is made.
3. Load Testing: A testing technique that puts demand on a system or device and
measures its response. It is usually conducted by performance engineers.
4. Agile Testing: A software testing practice that follows the principles of the agile
manifesto, emphasizing testing from the perspective of customers who will utilize the
system. It is usually performed by the QA teams.
5. Ad-hoc Testing: Testing performed without planning and documentation – the tester
tries to ‘break’ the system by randomly trying the system’s functionality. It is
performed by the testing team.
6. API Testing: A testing technique similar to unit testing in that it targets the code level.
API testing differs from unit testing in that it is typically a QA task and not a
developer task.
7. Automated Testing: A testing technique that uses automation testing tools to control
the environment set-up, test execution, and results reporting. It is performed by a
computer and is used within the testing teams.
8. Error-Handling Testing: A software testing type which determines the ability of the
system to properly process erroneous transactions. It is usually performed by the
testing teams.
9. GUI Software Testing: The process of testing a product that uses a graphical user
interface, to ensure it meets its written specifications. This is normally done by the
testing teams.
10. Database Testing: A type of software testing that checks the schema, tables,
triggers, etc. of the database under test. It also checks data integrity and consistency.
It may involve creating complex queries to load/stress test the database and check
its responsiveness.
11. Globalization Testing: A testing method that checks proper functionality of the
product with any of the culture/locale settings, using every type of international input
possible. It is performed by the testing team.
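To illustrate boundary value testing from this list, the sketch below exercises a hypothetical age validator at and around the edges of its valid range. The `isValidAge` function and the 18–60 range are assumptions made for the example, not part of any real system under test.

```java
// Boundary value testing sketch: a hypothetical validator that accepts
// ages in the inclusive range 18..60, exercised at and around each boundary.
public class BoundaryValueExample {
    // System under test (assumed for illustration).
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Representative boundary values: just below, on, and just above each edge.
        int[] inputs =   {17,    18,   19,   59,   60,   61};
        boolean[] want = {false, true, true, true, true, false};
        for (int i = 0; i < inputs.length; i++) {
            if (isValidAge(inputs[i]) != want[i]) {
                throw new AssertionError("Boundary check failed for age " + inputs[i]);
            }
        }
        System.out.println("All boundary checks passed");
    }
}
```

Testing just below, on, and just above each boundary catches the off-by-one errors that boundary value analysis is designed to find.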
SonarQube
It collects and analyzes source code and provides reports on the code quality of your
project. It combines static and dynamic analysis tools and enables quality to be
measured continually over time. Everything from minor styling choices to design
errors is inspected and evaluated by SonarQube. This provides users with a rich,
searchable history of the code, making it possible to analyze where the code is going
wrong and determine whether the problem is a styling issue, a code defect, code
duplication, lack of test coverage, or excessively complex code. The software analyzes
source code from different aspects and drills down through the code layer by layer,
from the module level down to the class level, with each level producing metric values
and statistics that reveal problematic areas in the source code that need improvement.
3. Testing Overview
3.4 Test Case Design
The first step in test case design is a thorough understanding of project requirements. QA
professionals collaborate closely with stakeholders, including business analysts, product
owners, and developers, to gain clarity on the desired functionalities and features of the
software. They review project documentation, such as requirement documents, user stories,
and design specifications, to identify testable scenarios and acceptance criteria.
Creating Test Cases: Once test scenarios are identified, QA professionals translate them
into detailed test cases. Test cases outline the steps to be executed, including preconditions,
test inputs, expected outcomes, and post conditions. They also include information on test
data, test environment setup, and any dependencies or assumptions. Test cases are
typically documented in a test case management tool or spreadsheet for easy reference and
execution.
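The elements of a test case described above can be modeled as a simple data structure. The fields and the sample login test case below are illustrative assumptions, not a prescribed schema:

```java
// A minimal model of a test case record, mirroring the elements described
// above: preconditions, steps, expected outcome, and post conditions.
public class TestCaseExample {
    static class TestCase {
        final String id;
        final String description;
        final String preconditions;
        final String[] steps;
        final String expectedOutcome;
        final String postconditions;

        TestCase(String id, String description, String preconditions,
                 String[] steps, String expectedOutcome, String postconditions) {
            this.id = id;
            this.description = description;
            this.preconditions = preconditions;
            this.steps = steps;
            this.expectedOutcome = expectedOutcome;
            this.postconditions = postconditions;
        }
    }

    public static void main(String[] args) {
        // A hypothetical sample entry, as it might appear in a test repository.
        TestCase tc = new TestCase(
            "TC-001",
            "Verify user can log in with valid credentials",
            "A registered user account exists",
            new String[] {"Open the login page", "Enter valid credentials", "Click Login"},
            "User is redirected to the dashboard",
            "User session is active");
        System.out.println(tc.id + ": " + tc.description + " (" + tc.steps.length + " steps)");
    }
}
```

In practice these fields are captured in a test case management tool or spreadsheet, as noted above; the structure is the same regardless of the storage medium.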
Ensuring Test Coverage: During test case design, QA professionals strive to achieve
maximum test coverage by addressing all aspects of the software's functionality, including
boundary conditions, error handling, and system integrations. They prioritize test cases
based on the criticality of features, risks, and business priorities to optimize testing efforts
and resources.
Review and Validation: Once test cases are created, they undergo review and validation to
ensure accuracy, completeness, and relevance. QA professionals collaborate with team
members, including peers and subject matter experts, to review test cases for clarity,
correctness, and alignment with requirements. Any identified issues or discrepancies are
addressed promptly to maintain the quality of the test cases.
Documentation and Maintenance: Finally, test cases are documented in a test repository
along with relevant metadata, such as test case ID, description, priority, and status. Test
cases are regularly updated and maintained to reflect changes in requirements,
functionalities, or system behavior. QA professionals continuously refine and enhance test
cases to adapt to evolving project needs and ensure effective test coverage throughout the
software development lifecycle.
In summary, test case design is a systematic and iterative process that involves translating
requirements into detailed test cases to validate the software's behavior. By following best
practices in test case design, QA professionals ensure comprehensive test coverage and
contribute to the delivery of high-quality software products.
3.5 Test Environment Setup
The test environment determines the conditions under which the software is tested. Setting
it up is an independent activity and can be started alongside test case development. The
testing team is typically not involved in this process; either the developer or the customer
creates the testing environment.
3.6 Test Execution
After test case development and test environment setup are complete, the test execution
phase begins. In this phase, the testing team executes the test cases prepared in the
earlier steps.
Defect logging: Any defects or issues that are found during test execution are
logged in a defect tracking system, along with details such as the severity,
priority, and description of the issue.
Test data preparation: Test data is required in many places, such as when filling
forms, adding table data, and entering credentials. It should be prepared in advance,
loaded into a separate system, and used during test execution.
Test result analysis: The results of the test execution are analysed to
determine the software’s performance and identify any defects or issues.
Defect retesting: Once the developer has fixed a defect, it is retested in the next
round of test execution to ensure that it has been fixed correctly.
Test Reporting: Test results are documented and reported to the relevant
stakeholders.
Test summary report: A report is created that summarizes the overall testing
process, including the number of test cases executed, the number of defects
found, and the overall pass/fail rate.
Defect tracking: All defects that were identified during testing are tracked and
managed until they are resolved.
Test environment clean-up: The test environment is cleaned up, and all test
data and test artifacts are archived.
Test closure report: A report is created that documents all the testing-related
activities that took place, including the testing objectives, scope, schedule, and
resources used.
Knowledge transfer: Knowledge about the software and testing process is
shared with the rest of the team and any stakeholders who may need to
maintain or support the software in the future.
Feedback and improvements: Feedback from the testing process is collected
and used to improve future testing processes
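The overall pass/fail rate mentioned in the test summary report is a simple ratio of passed to executed test cases. The figures in the sketch below are made up for illustration:

```java
// Test summary sketch: computing the pass rate reported to stakeholders
// in the test summary report.
public class TestSummaryExample {
    // Pass rate as a percentage of executed test cases.
    static double passRate(int executed, int failed) {
        if (executed <= 0) throw new IllegalArgumentException("no test cases executed");
        return 100.0 * (executed - failed) / executed;
    }

    public static void main(String[] args) {
        // Hypothetical run: 200 test cases executed, 8 failed.
        System.out.printf("Pass rate: %.1f%%%n", passRate(200, 8)); // prints 96.0%
    }
}
```

Reporting the rate alongside the raw counts (executed, passed, failed) keeps the summary meaningful even when the test suite grows or shrinks between releases.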
4. Testing Tools and Technology
Postman: A comprehensive API testing tool with features for testing, debugging, and
monitoring APIs.
RestAssured: A Java library for testing RESTful APIs, offering a fluent API and easy
integration with testing frameworks.
Appium: An open-source automation tool for testing mobile, native, and hybrid apps
across different platforms.
XCTest: Apple's native testing framework for writing unit, performance, and UI tests
for iOS and macOS apps.
These tools and technologies are widely used in the industry for various testing purposes,
catering to different aspects of software development and ensuring the quality and reliability
of applications across different platforms and environments.
5. Process of End-to-end Testing
5.1 TDD (Test Driven Development)
1. Write test cases: Test cases are prepared by the testing team before the functionality is
developed.
2. Run the test: Execute the test suite. The new test should fail at this point, which
confirms that it is valid and actually tests the behaviour intended.
3. Write the code: Implement the functionality or behaviour in the simplest way possible,
writing the minimal code required to make the test pass.
4. Run all tests: Once the code is written, run the entire test suite, including all previously
written tests as well as the new one. This ensures that the new code didn't break any
existing functionality.
5. Refactor code: Refactor the code if necessary to improve its design, readability, or
performance. Refactoring should not change the external behaviour of the code; the tests
should still pass after refactoring.
6. Repeat: Repeat the process for each new functionality or behaviour that needs to be
implemented.
By following these steps, developers can ensure that their code is thoroughly tested and that
new features or changes don't introduce unintended side effects or regressions.
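The cycle above can be sketched in miniature. The `add` function and the hand-rolled checks below are illustrative stand-ins for a real feature and a test framework such as JUnit:

```java
// TDD sketch: the test (testAdd) is written first and fails until the
// simplest implementation of add() makes it pass; refactoring then
// proceeds with the test kept green.
public class TddExample {
    // Step 3: the simplest code that makes the test pass.
    static int add(int a, int b) {
        return a + b;
    }

    // Step 1: the test, written before the implementation existed.
    static void testAdd() {
        if (add(2, 3) != 5) throw new AssertionError("add(2, 3) should be 5");
        if (add(-1, 1) != 0) throw new AssertionError("add(-1, 1) should be 0");
    }

    public static void main(String[] args) {
        testAdd(); // Steps 2 and 4: run the test suite before and after coding.
        System.out.println("All tests passed");
    }
}
```

Because the test exists before the code, every behaviour the code exhibits is one the team deliberately asked for; later refactoring (step 5) is safe as long as the same tests stay green.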
5.2 BDD (Behaviour Driven Development)
In BDD, everything we write must go into Given-When-Then steps, expressed in the Gherkin language.
1. Formulation of Scenarios: Scenarios are typically written in a structured format using
“Given-When-Then (GWT)” syntax to describe the initial context, the action being
performed, and the expected outcomes.
2. Automation of Scenarios: Once scenarios are defined, the testing team automates them
using BDD testing frameworks such as Cucumber BDD.
3. Implementation of Step Definitions:
For each scenario, the corresponding step definitions are implemented in code.
Step definitions are written in the Java programming language and map the plain-text
scenario steps to executable code that interacts with the system under test.
4. Execution of Scenarios:
The automated scenarios are executed against the system under test to verify that
it behaves as expected.
Scenarios can be executed locally during development and as part of continuous
integration (CI) pipelines to ensure that new changes don't introduce regressions.
The tool we use for execution of scenarios is Jenkins.
5. Review & Feedback:
After execution, the team reviews the test results and identifies any failures or issues.
Failed scenarios are investigated, and necessary fixes or adjustments are made to
the implementation or the scenarios themselves.
Review & feedback tool: JIRA.
6. Continuous Refinement:
The scenarios and step definitions are continuously refined and updated as the
requirements evolve or new features are added.
The updated code is shared with the other testing teams so that they can merge it
with their code.
Code repository management tool: Git.
By following this step-by-step workflow, teams can leverage BDD to ensure that software
requirements are clearly defined, understood, and validated through automated tests.
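A scenario in the Given-When-Then format of step 1 might look like the following. The login feature and the step wording are hypothetical, chosen only to show the Gherkin structure:

```gherkin
Feature: User login
  Scenario: Successful login with valid credentials
    Given a registered user exists with username "alice"
    When the user logs in with username "alice" and a valid password
    Then the user is redirected to the dashboard
```

In step 3, each of these plain-text steps is mapped to a Java step definition (for example via Cucumber's Given/When/Then annotations) that drives the system under test.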
6. Defect Management
In defect management, issues are resolved based on the severity levels of the reported
defects, considering the cost, risk, and benefits associated with each defect to decide
whether it should be fixed or deferred.
Critical Defect: The defect affects critical functionality or causes critical system
crashes or data loss.
High Defect: The defect affects some major areas of functionality, but not the system
as a whole.
Medium Defect: The defect affects minor functionality and non-critical data.
Low Defect: The defect does not affect functionality or data, and has no impact on
productivity or efficiency.
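The severity levels above can be encoded directly in a defect tracking workflow. The enum and the fix-before-release rule below are an illustrative assumption, not a mandated policy:

```java
// Defect severity sketch: in this illustrative policy, Critical and High
// defects must be fixed before release, while Medium and Low may be deferred.
public class DefectSeverityExample {
    enum Severity {
        CRITICAL, HIGH, MEDIUM, LOW;

        boolean mustFixBeforeRelease() {
            return this == CRITICAL || this == HIGH;
        }
    }

    public static void main(String[] args) {
        for (Severity s : Severity.values()) {
            System.out.println(s + " -> fix before release: " + s.mustFixBeforeRelease());
        }
    }
}
```

Keeping the rule in one place makes the cost/risk/benefit decision described above consistent across the whole defect backlog.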
6.2 Defect Reporting
Classification: Defects are classified based on their severity, priority, and impact on
the software's functionality. Common severity levels include critical, major, minor, and
cosmetic, while priority levels range from high to low based on the urgency of
resolution.
Reporting: Once documented and classified, defect reports are submitted to the
defect tracking system or bug repository. This centralized repository serves as a
record of all reported defects, providing visibility to stakeholders and facilitating
tracking and management of defect resolution efforts.
By following a systematic defect reporting process, the QA team can effectively manage and
resolve issues identified during testing, ultimately delivering high-quality software products
that meet user expectations and business requirements.
9. Test Environment
The test environment setup should outline information about the number of environments
and the required setup for each environment. For example, one test environment for the
functional test team and another for the UAT team. Define the number of users supported in
each environment, access roles for each user, software and hardware requirements like
operating system, memory, free disk space, number of systems, etc.
10. Release Control
Release Control is used to make sure that test execution and release management
strategies are established systematically. It contains information related to the
successive updates of the product and ensures the software meets the necessary
standards before it is delivered to the user. Release Control must include the
history of test processes, the teams responsible for each case, the main components,
the modifications made, the first error encountered, and the measures taken to overcome it.
Planning: Creating a detailed plan for the steps involved in the release process,
such as deployment schedules, testing activities, and backup procedures in case of
errors.
Deployment and Rollback: Deploying the product at the scheduled time, according to
the release plan and in coordination with the development team.
Backup Procedures: Set a regular backup schedule for frequently performed tasks and
for the most important data. Depending on the requirements, backups can be full,
differential, or incremental.
Backup Storage: Decide where to store backups, ensuring that a secure and reliable
medium is chosen. Cloud-based storage solutions offer additional protection, since
the risk of physical damage or data loss is lower.
Backup Validation: Validate the backups by performing periodic restore tests to
ensure data can be restored successfully.
Documentation: Documenting the detailed procedures for recovering the test
environment and backup data in the event of failures or data loss is useful for
future reference.
Regular Review and Testing: Periodically update backup and recovery plans according
to business requirements and changes in the test environments, and perform recovery
testing often.
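The full, differential, and incremental strategies mentioned above differ in which changes each backup captures. The timestamp-based sketch below is a simplified illustration of that difference:

```java
// Backup selection sketch: a full backup copies everything, a differential
// backup copies files changed since the last FULL backup, and an incremental
// backup copies files changed since the last backup of ANY kind.
public class BackupExample {
    static boolean shouldInclude(String type, long fileModified,
                                 long lastFullBackup, long lastAnyBackup) {
        switch (type) {
            case "full":         return true;
            case "differential": return fileModified > lastFullBackup;
            case "incremental":  return fileModified > lastAnyBackup;
            default: throw new IllegalArgumentException("unknown backup type: " + type);
        }
    }

    public static void main(String[] args) {
        long lastFull = 100, lastAny = 200; // hypothetical timestamps
        // A file changed at t=150: after the last full backup, before the latest backup.
        System.out.println("differential includes it: "
            + shouldInclude("differential", 150, lastFull, lastAny)); // true
        System.out.println("incremental includes it: "
            + shouldInclude("incremental", 150, lastFull, lastAny));  // false
    }
}
```

Differential backups grow until the next full backup but need only two archives to restore; incremental backups stay small but require replaying the whole chain, which is why restore testing matters for both.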
Playwright: A widely used automation tool for web application testing, supporting
various programming languages.
In an environment that prioritizes short release cycles, it is critical for all stakeholders to
understand what is necessary to get the job done. If the QA team doesn’t communicate their
vision for quality testing through a test strategy, there will be gaps in communication before
the project ever gets started. By following a unified test strategy document, an organization
can avoid such difficulties. The major benefits of good documentation are listed below.
Transparency: Comprehensive documentation and reporting ensure transparency and
visibility into testing activities, enabling stakeholders to track progress and make informed
decisions.
Traceability: Well-documented testing artifacts facilitate traceability between requirements,
test cases, test results, and defects, ensuring alignment with project objectives.
Communication: Regular test reports facilitate effective communication between QA teams,
development teams, project managers, and other stakeholders, fostering collaboration and
alignment of efforts.
Decision Making: Insights provided through documentation and reporting enable
stakeholders to make informed decisions regarding software quality, release readiness, and
risk mitigation strategies.
16. Summary
This document contains the unified test strategy for the QA team. The following information
has been covered in this document:
Definition of a test strategy.
Levels of testing, and the tools and environments used for testing.
The software testing life cycle.
Methods of defect management, risk mitigation, and backup plans.
Communication and responsibilities.
Review and approval methods.
Benefits of a test strategy.