ST Sem Answers
Quality is a complex concept—it means different things to different people, and it is highly
context dependent.
User View: This view perceives quality as fitness for purpose. According to this view, while
evaluating the quality of a product, one must ask the key question: “Does the product satisfy
user needs and expectations?”
Product View: In this case, quality is viewed as tied to the inherent characteristics of the
product.
Value-Based View: Quality, in this perspective, depends on the amount a customer is willing
to pay for it.
The factorial function is defined recursively as follows: factorial(0) = 1;
factorial(1) = 1;
factorial(n) = n * factorial(n-1);
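As a minimal illustrative sketch (not part of the original notes), the recursive definition above maps directly to code, and the base and recursive cases suggest obvious unit-test inputs:

```python
def factorial(n: int) -> int:
    """Compute n! from the recursive definition above."""
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    if n in (0, 1):                 # base cases: factorial(0) = factorial(1) = 1
        return 1
    return n * factorial(n - 1)     # recursive case: n * factorial(n-1)

# Test cases derived from the definition: both base cases, a typical value,
# and an invalid (negative) input as an error case.
assert factorial(0) == 1 and factorial(1) == 1 and factorial(5) == 120
```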
1.8.4: Operational Profile:
⮚ As the term suggests, an operational profile is a quantitative characterization
of how a system will be used.
⮚ It was created to guide test engineers in selecting test cases (inputs) using
samples of system usage.
Black box testing is like solving a puzzle without knowing what's inside the box. Imagine you
have a closed box, and you can only interact with it through certain inputs, but you can't see what
happens inside. So, you try different inputs and observe the outputs to make sure the box works
correctly.
1. **Graph-based testing:** This involves creating a sort of map that shows the different parts of
the system and how they're connected. Think of it like drawing circles for different parts and lines
between them to show how they relate. Then, you test these connections to ensure they work
properly.
a) Unidirectional: it represents only one direction.
b) Bidirectional: a link that represents both directions.
c) Undirected: a link that does not represent any direction.
d) Parallel: used to represent more than one link (i.e.) multiple links.
✔ There are two properties associated with nodes and links:
a) Node weight: represents the properties of a node.
b) Link weight: represents the characteristics of a link.
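A hedged sketch of how such a graph might be captured for graph-based testing; all object and module names here are invented for illustration:

```python
# Hypothetical object graph: node weights describe properties of each part,
# link weights describe the characteristics of the relationship between parts.
nodes = {
    "login_page":  {"weight": "renders credential form"},
    "auth_module": {"weight": "validates credentials"},
}
links = [
    # (source, target, direction, link weight)
    ("login_page", "auth_module", "unidirectional", "submits credentials"),
    ("auth_module", "login_page", "unidirectional", "returns success or failure"),
]

# A graph-based test exercises each link and checks the expected relationship.
for src, dst, direction, weight in links:
    print(f"Test link {src} -> {dst} ({direction}): expect '{weight}'")
```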
2. **Equivalence partitioning:** This method organizes input data into groups. For instance, if a
program accepts numbers from 1 to 1000, instead of testing every single number, you'd categorize
them into three groups: numbers from 1 to 1000, numbers below 1, and numbers above 1000.
Testing a few numbers from each group helps cover all possibilities.
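A minimal sketch of this partitioning, assuming a hypothetical `accepts_number` function whose valid range is 1 to 1000:

```python
def accepts_number(n: int) -> bool:
    """Hypothetical system under test: accepts integers from 1 to 1000."""
    return 1 <= n <= 1000

# One representative value from each equivalence class covers that class.
partitions = {
    "valid (1..1000)":  (500,  True),
    "invalid (< 1)":    (-7,   False),
    "invalid (> 1000)": (2500, False),
}
for name, (value, expected) in partitions.items():
    assert accepts_number(value) == expected, f"partition '{name}' failed"
```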
3. **Boundary value analysis:** This is like testing the edges of what the system can handle. For
instance, if a program accepts numbers from 1 to 1000, you'd test the numbers 1, 1000, and a
couple just below and above those limits. Errors often show up at the boundaries, so this helps
catch those issues.
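Continuing the same hypothetical 1-to-1000 example, a sketch of the values that boundary value analysis would pick:

```python
def accepts_number(n: int) -> bool:   # same hypothetical system under test
    return 1 <= n <= 1000

# Boundary values: the limits themselves plus the values immediately
# below and above each limit, where errors tend to cluster.
boundary_cases = {
    0:    False,  # just below the lower boundary
    1:    True,   # lower boundary
    2:    True,   # just above the lower boundary
    999:  True,   # just below the upper boundary
    1000: True,   # upper boundary
    1001: False,  # just above the upper boundary
}
for value, expected in boundary_cases.items():
    assert accepts_number(value) == expected, f"boundary {value} failed"
```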
4. **Orthogonal array testing (OAT):** This is a more organized way of testing when dealing with
complex systems. Instead of testing every possible combination of inputs, OAT helps pick out a
smaller number of tests that cover most scenarios efficiently. For example, if you have three things
to test, each with three options, instead of doing 27 tests (3 x 3 x 3), OAT can figure out nine tests
that cover a lot of ground.
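As an illustration, the standard L9 orthogonal array covers three 3-level factors in nine runs; the sketch below (using generic level labels 0, 1, 2) checks that every pair of factors still sees all nine level combinations:

```python
from itertools import combinations

# Standard L9 orthogonal array for three factors with three levels each.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

# Sanity check: every pair of factors covers all 9 level combinations.
for i, j in combinations(range(3), 2):
    seen = {(row[i], row[j]) for row in L9}
    assert seen == {(a, b) for a in range(3) for b in range(3)}

print(f"{len(L9)} runs cover all pairwise combinations instead of 27 exhaustive tests")
```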
Overall, black box testing is about checking if the system behaves as expected without needing to
know its internal workings. It's like being a detective, figuring out how the pieces fit together and
making sure everything works smoothly.
5)List & Explain the Role of testing in detail?
ROLE OF TESTING
⮚ Testing plays an important role in achieving and assessing the quality of software
Product.
⮚ Software quality assessment can be divided into two broad categories, namely,
static analysis and dynamic analysis.
Static Analysis:
⮚ As the term “static” suggests, it is based on the examination of a number of
documents, namely requirements documents, software models, design documents, and
source code.
⮚ Traditional static analysis includes code review, inspection, walk-through,
algorithm analysis, and proof of correctness.
⮚ It does not involve actual execution of the code under development. Instead, it
examines code and reasons over all possible behaviours that might arise during run time.
Compiler optimizations are a standard form of static analysis.
Dynamic Analysis:
Dynamic analysis of a software system involves actual program execution in order to
expose possible program failures.
⮚ The behavioural and performance properties of the program are also observed.
⮚ Programs are executed with both typical and carefully chosen input values.
⮚ Often, the input set of a program can be impractically large.
⮚ However, for practical considerations, a finite subset of the input set can be selected.
Static testing is a software testing method that examines a program -- along with any
associated documents -- but does not require the program to be executed. Dynamic testing,
the other main category of software testing, requires testers to interact with the program while
it runs. The two methods are frequently used together to ensure the basic functionalities of a
program.
Instead of executing the code, static testing is a process of checking the code, design
documents, and requirements before the program is run, in order to find errors. The main goal is to find flaws in
the early stages of development, because it is normally easier to find the sources of possible
failures this way.
It's common for code, design documents and requirements to be static tested before the
software is run to find errors. Anything that relates to functional requirements can also be
checked. More specifically, the process will involve reviewing written materials that provide
a wider view of the tested software application as a whole. Some examples of what's tested
include the following:
● requirement specifications
● design documents
● user documents
● webpage content
● source code
● Informal. Informal reviews will not follow any specific process to find errors.
Co-workers can review documents and provide informal comments.
● Walk-through. The author of the document in question will explain the document to
their team. Participants will ask questions and write down any notes.
● Technical or peer reviews. Technical specifications are reviewed by peers to detect any
errors.
Dynamic Testing
Dynamic testing is a type of software testing performed to analyze the dynamic behavior of the
code. It involves supplying input values to the software and analyzing the resulting output values.
Dynamic testing is basically performed to observe the dynamic behavior of code.
It refers to observing the response of the system to variables that are not
constant and change with time. To perform dynamic testing the software must be compiled
and run. It involves working with the software by giving input values and checking whether the
output is as expected, by executing particular test cases either manually or through an
automated process. Of the two V's, Verification and Validation, Validation corresponds to
dynamic testing.
Levels of Dynamic Testing
There are various levels of Dynamic Testing. They are:
● Unit Testing
● Integration Testing
● System Testing
● Acceptance Testing
There are several levels of dynamic testing that are commonly used in the software
development process, including:
System testing: System testing is the process of testing the entire software
system to ensure that it meets the specified requirements and is working as
intended. This level of testing typically involves testing the software’s
functionality, performance, and usability.
1. **Incomplete Requirements:** When the requirements for the software are not clear or are
constantly changing, it becomes challenging to create accurate test cases. Without a clear
understanding of what the software is supposed to do, testing becomes less effective.
2. **Time and Budget Constraints:** Often, there's pressure to complete testing within tight
deadlines or limited budgets. This might lead to rushed testing, cutting corners, or inadequate
coverage, risking the quality of the final product.
3. **Lack of Resources:** Testing requires skilled professionals, tools, and infrastructure.
Sometimes there might be a shortage of skilled testers, insufficient testing environments, or
outdated tools, hampering the testing process.
4. **Complexity of Systems:** Modern software systems are complex, with intricate interactions
between various components. Testing such systems comprehensively becomes a daunting task, and
ensuring all scenarios are covered becomes increasingly challenging.
5. **Dependency on Human Judgment:** Testing involves human judgment, which can introduce
biases or overlook certain issues. Testers might miss edge cases or make assumptions that lead to
overlooking critical defects.
6. **Regression Testing:** As software evolves and new features are added or bugs are fixed,
regression testing becomes essential to ensure that changes haven't introduced new issues
elsewhere in the system. This can be time-consuming and resource-intensive.
7. **Environment and Data Management:** Testing requires specific environments that mimic the
real-world conditions where the software will be used. Managing these environments and ensuring
accurate and relevant test data can be complex.
8. **Automated Testing Challenges:** While automated testing can improve efficiency, creating
and maintaining automated test scripts requires time and effort. Maintenance becomes crucial as
the software evolves, and changes in the application can cause automated tests to become outdated.
10. **Defect Tracking and Management:** Managing a large number of identified defects,
prioritizing them, and ensuring they get addressed appropriately is a significant challenge in
testing. Without a robust system for defect tracking, some issues might slip through the cracks.
TESTING ACTIVITIES:
In order to test a program, a test engineer must perform a sequence of testing activities. Most of
these activities have been shown in Figure 1.6 and are explained in the following. These
explanations focus on a single test case.
Identify an objective to be tested:
The first activity is to identify an objective to be tested. The objective defines the intention, or
purpose, of designing one or more test cases to ensure that the program supports the objective.
A clear purpose must be associated with every test case.
Select inputs:
The second activity is to select test inputs. Selection of test inputs can be based on the requirements
specification, the source code, or our expectations.
Test inputs are selected by keeping the test objective in mind.
8 )Explain the concepts of unit, integration, system, acceptance, and regression testing
Unit, integration, system, and acceptance testing are the successive levels at which a software product is tested; regression testing is performed whenever the software is modified, to ensure that existing behavior is not broken.
Each level of testing plays a critical role in ensuring the software's reliability, functionality, and
alignment with user expectations before it is released or deployed.
9)Give brief explanation about verification and validation
VERIFICATION AND VALIDATION
1.3 Verification:
⮚ This kind of activity helps us in evaluating a software system by
determining whether the product of a given development phase
satisfies the requirements established before the start of that phase.
Validation:
⮚ Activities of this kind help us in confirming that a product meets its
intended use.
⮚ Validation activities aim at confirming that a product meets its
customer’s expectations.
1.4 OBJECTIVES OF TESTING:
⮚ The stakeholders in a test process are the programmers, the test engineers, the
project managers, and the customers.
⮚ A stakeholder is a person or an organization who influences a system’s behaviors
or who is impacted by that system.
⮚ Different stakeholders view a test process from different perspectives as explained
below:
1.4.1 It does work:
⮚ While implementing a program unit, the programmer may want to test whether or
not the unit works in normal circumstances.
⮚ The programmer gets much confidence if the unit works to his or her satisfaction.
The same idea applies to an entire system as well—once a system has been
integrated, the developers may want to test whether or not the system performs the
basic functions.
⮚ Here, for psychological reasons, the objective of testing is to show that the
system works, rather than to show that it does not work.
1.4.2 It does not work:
⮚ Once the programmer (or the development team) is satisfied that a unit (or the
system) works to a certain degree, more tests are conducted with the objective of
finding faults in the unit (or the system).
⮚ Here, the idea is to try to make the unit (or the system) fail.
1.4.3. Reduce the risk of failure:
⮚ Most of the complex software systems contain faults, which cause the system
to fail from time to time.
⮚ This concept of “failing from time to time” gives rise to the notion of failure
rate.
⮚ As faults are discovered and fixed while performing more and more tests, the
failure rate of a system generally decreases.
⮚ Thus, a higher level objective of performing tests is to bring down the risk of
failing to an acceptable level.
1.4.4: Reduce the cost of testing:
⮚ The different kinds of costs associated with a test process include
⮚ The cost of designing, maintaining, and executing test cases,
⮚ The cost of analysing the result of executing each test case,
⮚ The cost of documenting the test cases, and
⮚ The cost of actually executing the system and documenting it.
10)Write short notes about Test Planning and Design, and how Test Execution is Monitored and
Measured
### Test Planning and Design:
**Test Planning:**
- **Purpose:** To organize for test execution by defining the framework, scope, resources needed,
effort required, schedule, and budget.
- **Components:**
- **Framework:** Set of ideas or circumstances guiding the tests.
- **Scope:** Domain or extent of test activities, covering managerial aspects.
- **Details:** Outlining resource needs, effort, activity schedules, and budget.
**Test Design:**
- **Purpose:** To critically study system requirements, identify testable system features, define
test objectives, and specify test case behavior.
- **Steps Involved:**
- Critical study of system requirements.
- Identification of testable system features.
- Defining test objectives based on requirements and functional specifications.
- Designing test cases for each test objective.
- Creating modular test components called test steps within test cases.
- Combining test steps to form complex, multistep tests.
- Ensuring clear and understandable test case specifications for ease of use and reuse.
**Bottom Line:** Metrics gathered during test execution provide critical insights that aid in
decision-making, allowing management to make informed choices for effective project control,
quality improvement, and cost reduction.
UNIT-2
11)Explain the Concept of Integration Testing
**Challenges in Integration:**
- **Interface Errors:** Assembling a complete system from modules encounters interface errors
due to their interconnections.
- **Stability Testing:** Creating a stable system from components requires extensive testing.
- **Phases in Building a Deliverable System:** Integration testing and system testing are key
phases in constructing a deliverable system.
**Importance of Interfaces:**
- **Facilitate Functionality Realization:** Modules interface to realize functional requirements and
allow one module to access services from another.
- **Control and Data Transfer:** Interfaces establish mechanisms for passing control and data
between modules.
Addressing these issues requires meticulous attention to detail, clear communication among
development teams, adherence to design specifications, robust testing practices, and rigorous code
reviews. By understanding these potential pitfalls, software developers and project managers can
proactively mitigate risks during the development lifecycle, leading to more robust and reliable
software systems.
14)Explain in detail the various System Integration Techniques with suitable example?
1. **Top-Down Integration:**
- Create a hierarchical structure with modules at different levels.
- Start with the top-level module at the highest position.
- Draw arrows downwards, connecting top-level modules to their lower-level modules.
- Show integration happening incrementally by adding modules and connections in subsequent diagrams
(Fig 2.1 to Fig 2.7).
Utilize diagrams with shapes representing modules (boxes or circles) and arrows indicating the integration
flow. Each diagram should build upon the previous one, showing the incremental or hierarchical integration
process as described in the text explanations.
The tester knows the input to the black box and observes the expected outcome of the execution.
White-box testing uses information about the structure of the system to test its correctness.
It takes into account the internal mechanisms of the system and the modules.
Intrasystem Testing:
This form of testing constitutes low-level integration testing with the objective of combining the modules
together to build a cohesive system.
The process of combining modules can progress in an incremental manner akin to constructing and testing
successive builds.
Intersystem Testing:
Intersystem testing is a high-level testing phase which requires interfacing independently tested systems.
In this phase, all the systems are connected together, and testing is conducted from end to end.
The term end to end is used in communication protocol systems, and end- to-end testing means initiating a
test between two access terminals interconnected by a network.
The purpose in this case is to ensure that the interaction between the systems work together, but not to
conduct a comprehensive test.
Pairwise Testing:
There can be many intermediate levels of system integration testing between the above two extreme levels,
namely intrasystem testing and intersystem testing.
Pairwise testing is a kind of intermediate level of integration testing.
In pairwise integration, only two interconnected systems in an overall system are tested at a time.
The purpose of pairwise testing is to ensure that two systems under consideration can function together,
assuming that the other systems in the overall environment behave as expected.
The plan focuses on meticulous testing at various integration levels, ensuring seamless interaction between
modules, and evaluating the system's endurance and robustness under various stress conditions. It also
outlines precise entry and exit criteria for each phase, maintaining quality control throughout the integration
process.
4. **Thermal Tests:**
- These tests evaluate how the system performs under varying temperature and humidity conditions.
- Hardware components are subjected to different temperature and humidity cycles, simulating real-world
conditions.
- Thermal sensors on heat-generating components monitor whether they exceed their specified operating
temperatures.
- Thermal shock tests replicate sudden, extreme temperature changes to assess the system's resilience.
5. **Environmental Tests:**
- Simulates real-world stress conditions like vibrations, shocks, extreme temperatures, and other
environmental factors.
- It's vital to ensure that external factors like vibrations from machinery or environmental elements like
smoke and sand do not adversely affect the system's performance or cause damage.
7. **Acoustic Test:**
- Measures the system's noise emission levels to ensure compliance with safety regulations.
- Avoids excessive noise that could affect personnel working in proximity to the system.
8. **Safety Test:**
- Focuses on identifying and eliminating potential hazards posed by the system to users or the
environment.
- Ensures that components like batteries do not leak dangerous substances and adhere to safety standards.
9. **Reliability Test:**
- Assesses the likelihood of hardware components failing over their operational lifespan.
- MTBF, or Mean Time Between Failures, is calculated to estimate the expected lifespan of components
before failure.
These comprehensive hardware tests are integral to ensuring that the hardware components are reliable,
durable, safe, and compliant with regulatory standards before integrating them into the larger system
involving software components. This rigorous testing mitigates risks and ensures the system's stability and
reliability in real-world conditions.
1. Efficiency and Speed: Automated integration tests can be executed quickly and consistently.
They can cover a wide range of scenarios and functionalities within the system, ensuring that
various components work together as expected. This speed allows for faster verification of
the daily build process, identifying issues promptly.
2. Reliability: Automation eliminates human error and ensures that tests are executed
consistently. This reliability helps in accurately detecting integration issues that might arise
due to changes in the codebase.
3. Comprehensive Testing: Integration tests can encompass multiple modules or components of
a system simultaneously, checking how they interact and function together. Automated tests
can cover various integration points, which might be impractical or time-consuming to test
manually.
4. Regression Testing: With each new daily build, automated integration tests can be rerun
efficiently to check whether the changes introduced have impacted the existing functionalities
adversely. This helps catch regression issues early in the development cycle.
5. Continuous Monitoring: Automated tests can be integrated into Continuous
Integration/Continuous Deployment (CI/CD) pipelines, ensuring that integration tests are run
with each build. This continuous monitoring helps in early identification of integration issues,
promoting a more stable daily build process.
6. Faster Feedback Loop: Automated tests provide quick feedback on the status of the
integration process. If any integration failures occur, they can be reported immediately,
enabling developers to address them promptly before they escalate.
7. Documentation and Traceability: Automated integration tests serve as a form of
documentation for how different components should interact. They also provide traceability,
allowing developers to track changes and understand how modifications might affect the
integration.
20)Describe the circumstances under which you would apply white-box testing, black-box testing, or
both techniques to evaluate a COTS component.
When evaluating a Commercial Off-The-Shelf (COTS) component, the choice between white- box and
black-box testing (or a combination of both) depends on several factors, including access to the component's
internal structure, available documentation, and the testing objectives. Here's how each technique might be
applied:
White-box Testing:
1. Access to Internal Structure: If you have access to the source code or internal architecture of the COTS component, white-box testing can be highly effective. It involves understanding the internal logic and structure of the component so that test cases can be designed to exercise its code paths.
Black-box Testing:
1. Limited Access to Internal Structure: When the COTS component's internal structure is proprietary or inaccessible, black-box testing becomes the primary option. Testers rely on the software's external behavior, specifications, and requirements without knowledge of its internal workings.
2. Functional Validation: Black-box testing is ideal for evaluating the COTS component against its specified functionalities, intended use cases, and documented requirements. This method focuses on inputs, outputs, and system behavior rather than internal mechanisms.
3. User Perspective Testing: If the goal is to evaluate the component from an end-user perspective without delving into its internal implementation details, black-box testing is more suitable. It helps simulate real-world usage scenarios and identify usability issues.
Combined Approach (Gray-box Testing):
1. When Partial Access is Available: In some cases, testers might have partial access to the internal structure or some knowledge of the component's architecture. This situation can lead to a combined approach where both white-box and black-box techniques are employed (gray-box testing).
2. Comprehensive Testing: Using both techniques in conjunction allows for a more comprehensive evaluation. White-box testing can focus on specific critical areas or customizations, while black-box testing ensures that the component functions correctly according to its specifications.
UNIT-3
21)Discuss the Different type Of basic Tests in detail with suitable Example
The basic tests are fundamental assessments carried out to ascertain the preliminary functioning of a
system's key features. They are designed to provide an initial overview of system readiness without delving
into exhaustive testing. Let's elaborate on each of these tests:
Each of these tests plays a vital role in ensuring different aspects of a system's functionality and
performance, contributing to a comprehensive evaluation before more rigorous testing phases.
Each type of robustness test is designed to probe specific scenarios that might challenge the system's
stability, resilience, or recovery capabilities. These tests are essential to ensure that the system operates
reliably, even when facing unexpected or adverse conditions in its environment.
23)Discuss in detail about the Characteristics of Testable Requirements
These characteristics collectively ensure that the specified requirements are not just comprehensive and
understandable but also practical for testing, implementation, and system validation. They form the
backbone of a system's reliability and adaptability, enabling teams to build systems that are not only
functional but also resilient to changes and errors.
1. **Purpose:**
- **Performance under Load:** To measure how the system behaves under normal and extreme load
conditions.
1. **Objective:**
- **Continuous Operation:** To ensure the system functions without failure over time.
2. **Testing Parameters:**
- **Memory Usage:** Monitoring memory consumption during operation.
- **CPU Efficiency:** Assessing CPU performance and resource utilization.
- **Transaction Responses:** Verifying the system's responsiveness.
- **Disk Space Checks:** Ensuring adequate disk space availability.
Load and stability testing are crucial phases in software development as they help identify potential issues,
bottlenecks, and failure points in applications or systems under real-world conditions. These tests ensure
that systems can handle expected user loads without crashing or significant performance degradation over
time.
DOCUMENTATION TESTS
Documentation testing means verifying the technical accuracy and readability of the user manuals,
including the tutorials and the on-line help.
Documentation testing is performed at three levels as explained in the following:
Read Test: In this test the documentation is reviewed for clarity, organization, flow, and accuracy without
executing the documented instructions on the system.
Hands-On Test: The on-line help is exercised and the error messages verified to evaluate their accuracy and
usefulness.
Functional Test: The instructions embodied in the documentation are followed to verify that the system
works as it has been documented.
The following concrete tests are recommended for documentation testing:
Verify that the glossary accompanying the documentation uses a standard, commonly accepted terminology
and that the glossary correctly defines the terms.
Verify that there exists an index for each of the documents and the index block is reasonably rich and
complete. Verify that the index section points to the correct pages.
Verify that there is no internal inconsistency within the documentation.
Verify that the on-line and printed versions of the documentation are the same.
Verify the installation procedure by executing the steps described in the manual in a real environment.
Verify the troubleshooting guide by inserting an error and then using the guide to troubleshoot that error.
Verify the software release notes to ensure that these accurately describe (i) the changes in features and
functionalities between the current release and the previous ones and (ii) the set of known defects and their
impact on the customer.
Verify the on-line help for its:
● usability
● integrity
● usefulness of the hyperlinks and cross-references to related topics
● effectiveness of table look-up
● accuracy and usefulness of indices
Verify the configuration section of the user guide by configuring the system as described in the
documentation.
Finally, use the document while executing the system test cases. Walk through the planned or existing user
work activities and procedures using the documentation to ensure that the documentation is consistent with
the user work.
Documentation Testing?
Documentation Testing involves testing of the documented artifacts that are usually developed before or
during the testing of Software.
Documentation for Software testing helps in estimating the testing effort required, test coverage,
requirement tracking/tracing, etc. This section includes the description of some commonly used documented
artifacts related to Software development and testing, such as:
Test Plan
Requirements
Test Cases
Traceability Matrix
Regression testing aims to maintain software reliability by ensuring that modifications or enhancements to
the codebase don't inadvertently introduce new bugs or issues. The choice of test cases and timing of
regression testing play a critical role in detecting potential regressions while ensuring the stability and
reliability of the software.
27)Discuss the Different type Of Functionality Tests in detail with suitable Example
Functionality testing is a critical aspect of software validation, aimed at verifying that a system performs in
alignment with specified requirements. It covers various facets and subgroups to thoroughly examine and
validate different functionalities of the system. Let's break down each subgroup:
These different types of functionality tests address various aspects of the system, from its basic connections
and module functionalities to complex areas such as security and GUI usability. Each test subgroup aims to
ensure that the software behaves as expected, meeting all specified requirements and providing a seamless
user experience.
28)Explain in detail the Reliability testing & Interoperability testing in detail?
Detailed explanations of both Reliability Testing and Interoperability Testing are given below.
### Reliability Testing
**Definition:** Reliability Testing is a type of software testing aimed at assessing the system's ability to
function consistently and reliably over a prolonged period under specific conditions. It measures the
system's ability to remain operational without failure.
**Objective:** The primary objective of Reliability Testing is to identify defects or issues that can lead to
system failures over time, thus ensuring the system's consistent performance without breakdowns.
1. **Stress Testing:** Subjecting the system to high loads or usage to identify performance bottlenecks.
2. **Endurance Testing:** Running the system continuously for an extended period to uncover issues that
may occur over time.
3. **Recovery Testing:** Evaluating the system's ability to recover from failures or crashes.
4. **Load Testing:** Checking the system's ability to handle simultaneous users or transactions.
5. **Volume Testing:** Assessing the system's capability to handle large data volumes.
6. **Soak Testing:** Similar to endurance testing, focusing on stability under a normal load over an
extended time.
7. **Spike Testing:** Subjecting the system to sudden, unexpected increases in load.
- **Mean Time Between Failures (MTBF):** The average time elapsed between failures.
- **Mean Time To Failure (MTTF):** The average time until the system encounters a failure.
- **Mean Time To Repair (MTTR):** The average time taken to fix failures.
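A small worked sketch of these three measures, using invented failure and repair logs and the common convention that, for repairable systems, MTBF = MTTF + MTTR:

```python
# Hypothetical observations: operating hours before each failure,
# and the repair hours that followed each failure.
uptimes = [120.0, 200.0, 160.0]   # time the system ran before each failure
repairs = [2.0, 4.0, 3.0]         # time taken to repair each failure

mttf = sum(uptimes) / len(uptimes)   # Mean Time To Failure
mttr = sum(repairs) / len(repairs)   # Mean Time To Repair
mtbf = mttf + mttr                   # Mean Time Between Failures (repairable system)

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")
```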
1. **Correctness:** Ensures that the software functions correctly as per the requirements.
2. **Negative Testing:** Checking for what the system should not do.
3. **User Interface:** Verifying the interface elements for usability and design.
4. **Usability:** Testing how suitable the software is for users to achieve goals.
5. **Performance:** Evaluating system speed, load handling, stress resistance, etc.
6. **Security:** Assessing the system's ability to protect data and maintain functionality.
7. **Integration:** Testing the combined components' behavior after integration.
8. **Compatibility:** Ensuring the application works across different environments or devices.
### Interoperability Testing
**Definition:** Interoperability Testing assesses the system's ability to interoperate or work with third-party
products or systems seamlessly. It ensures that different systems can communicate and operate together
without compromising their unique functionalities.
**Objective:** The main goal of Interoperability Testing is to validate that systems can communicate
effectively without affecting their independent functionalities.
1. **Compatibility and Integration:** Ensuring systems can connect and work together seamlessly.
2. **Data Exchange:** Verifying the accurate and secure transfer of data between systems.
3. **Configuration Testing:** Testing reconfigurable aspects during interoperability.
❖ A feature is a set of related requirements. The test design activities must be performed for each feature to be tested.
Negative: In this factor we check what the product is not supposed to do.
User Interface : In UI testing we check the user interfaces. For example in a web page we
may check for a button. In this we check for button size and shape. We can also check the
navigation links.
Usability : Usability testing measures the suitability of the software for its users, and is
directed at measuring the following factors with which specified users can achieve specified
goals in particular environments.
1. Effectiveness: The capability of the software product to enable users to achieve
specified goals with accuracy and completeness in a specified context of use.
Performance testing can serve various purposes. It can demonstrate that the system meets its
performance criteria.
1. Load Testing: This is the simplest form of performance testing. A load test is usually
conducted to understand the behavior of the application under a specific expected load.
2. Stress Testing: Stress testing focuses on the ability of a system to handle loads beyond
maximum capacity. System performance should degrade slowly and predictably without
failure as stress levels are
increased.
3. Volume Testing: Volume testing belongs to the group of non-functional values tests.
Volume testing refers to testing a software application for a certain data volume. This
volume can in generic terms be the database size or it could also be the size of an
interface file that is the subject of volume testing.
Security : Process to determine that an Information System protects data and maintains
functionality as intended. The basic security concepts that need to be covered by security
testing are the following:
1. Confidentiality: A security measure which protects against the disclosure of
information to parties other than the intended recipient. This is by no means the only way
of ensuring confidentiality.
2. Integrity: A measure intended to allow the receiver to determine that the information
which it receives has not been altered in transit other than by the originator of the
information.
30. How will you identify requirements for developing any kind of application, and
explain the various states available for selecting a requirement?
The provided information gives a comprehensive view of the Requirement Identification process, focusing
on the life cycle of requirements within an organization. Here's a breakdown of the key elements discussed:
This structured approach ensures that requirements are captured, reviewed, tested, and verified
systematically to avoid misinterpretations, meet user expectations, and facilitate smooth system
development and delivery.
UNIT-4
31)Illustrate the various Stages that can be adapted on framing Structure of a System Test Plan?
Structure of a System Test Plan
A good plan for performing system testing is the cornerstone of a successful software project. In the absence
of a good plan it is highly unlikely that the desired level of system testing is performed within the stipulated
time and without overusing resources such as manpower and money.
Moreover, in the absence of a good plan, it is highly likely that a low-quality product is delivered even at a
higher cost and later than the expected date.
The purpose of system test planning, or simply test planning, is to get ready and organized for test
execution. Starting a system test in an ad hoc way, after all the modules are checked in to the version control
system, is ineffective.
Working under deadline pressure, people, that is, test engineers, have a tendency to take shortcuts and to
“just get the job done,” which leads to the shipment of a highly defective product.
Consequently, the customer support group of the organization has to spend a lot of time in dealing with
unsatisfied customers and be forced to release several patches to demanding customers.
Test planning is essential in order to complete system testing and ship quality product to the market on
schedule.
Planning for system testing is part of overall planning for a software project. It provides the framework,
scope, resources, schedule, and budget for the system testing part of the project.
Test efficiency can be monitored and improved, and unnecessary delay can be avoided with a good test plan.
The outline of a system test plan is as follows:
Introduction
Feature description
Assumptions
Test approach
Test suite structure
Test environment
Test execution strategy
Test effort estimation
Scheduling and milestones
The purpose of a system test plan is summarized as follows:
It provides guidance for the executive management to support the test project, thereby allowing them to
release the necessary resources to perform the test activity.
It establishes the foundation of the system testing part of the overall software project.
It provides assurance of test coverage by creating a requirement traceability matrix.
It outlines an orderly schedule of events and test milestones that are tracked.
It specifies the personnel, financial, equipment, and facility resources required to support the system testing
part of a software project.
The activity of planning for system testing combines two tasks: research and estimation. Research allows us
to define the scope of the test effort and resources already available in-house.
Each major functional test suite consisting of test objectives can be described in a bounded fashion using the
system requirements and functional specification as references.
A system test plan is outlined in the list above.
The test plan is released for review and approval after the author, that is, the leader of the system test group,
completes it with all the pertinent details.
The review team must include software and hardware development staff, customer support group members,
system test team members, and the project manager responsible for the project.
The author(s) should solicit reviews of the test plan and ask for comments prior to the meeting.
The comments can then be addressed at the review meeting.
The system test plan must be completed before the software project is committed.
The Test Execution Strategy is a critical aspect of system testing, ensuring a systematic approach to
executing test cases, handling failures, and progressing towards a desired quality level. Here's a detailed
breakdown:
This detailed test execution strategy addresses various aspects of handling system testing, ensuring
controlled progress, efficient resource usage, and measured improvements in the system's quality across
multiple cycles.
33)Explain detail how the test Environment is created and accessed?
Designing a test environment, often constrained by budget and resources, requires innovative thinking to
effectively fulfill testing objectives. Here's a comprehensive breakdown of strategies, challenges, and steps
involved:
2. **Information Gathering:**
- Gather information on customer deployment architecture, including hardware, software, and
manufacturers.
- Obtain lists of third-party products, tools, and software for integration and interoperability testing.
- Identify hardware requirements for specialized features and new project hardware.
- Analyze test objectives (functional, performance, stress, load, scalability) to identify necessary
environment elements.
- Define security requirements to prevent disruptions during tests.
3. **Equipment Identification:**
- List necessary networking equipment like switches, hubs, servers, and cables for setting up the test lab.
- Identify accessories required for testing, such as racks, vehicles, and shielding to prevent interference.
5. **Equipment Procurement:**
- Review available in-house equipment and identify items that need to be procured.
- Create a test equipment purchase list with quantities, unit prices, maintenance costs, and justifications.
- Obtain quotes from suppliers for accurate pricing and finalize procurement orders.
34)Explain in detail the various Scheduling and Test Milestones in software testing?
Scheduling system testing involves meticulous planning and coordination to ensure efficient task execution
and timely achievement of milestones. Here's a step-by-step breakdown of effective scheduling for a test
project:
2. **Milestone Identification:**
- List major milestones including reviews, completion of test plans, test case creation, environment setup,
and system test cycles.
3. **Identify Interdependencies:**
- Understand how tasks and software milestones influence each other's flow.
4. **Resource Identification:**
- List and categorize available resources like human resources, hardware/software, their expertise,
availability, and capacity.
5. **Resource Allocation:**
- Allocate resources to each task considering availability and expertise required.
6. **Task Scheduling:**
- Schedule start and end dates of each task, considering dependencies and available resources.
7. **Milestone Insertion:**
- Insert remaining milestones into the schedule.
9. **Assumptions Documentation:**
- Document assumptions like hiring, equipment availability, and space requirements.
### Introduction:
- **Objective:** Describes the structure and objectives of the test plan.
- **Components:**
- **Test Project Name:** Identifies the project.
- **Revision History:** Tracks changes made to the plan.
- **Terminology and Definitions:** Clarifies any ambiguous terms.
- **Approvers' Names and Approval Date:** Indicates authorization.
- **References:** Lists any documents used as references.
- **Summary:** Provides an overview of the test plan's content.
### Assumptions:
- **Purpose:** Describes areas where test cases might not be designed due to specific reasons:
- **Equipment Availability:** Constraints related to scalability, third-party equipment procurement,
regulatory compliance, and environmental testing.
- **Importance:** These assumptions are crucial considerations while reviewing the test plan.
36)Discuss the Test Suite Structure needed for developing test case in detail?
A test suite serves as a container for organizing and managing multiple test cases that are related to one
another. Here are the key points about test suites and how they function:
### Summary:
Test suites are a vital part of organizing test cases and streamlining the testing process. They help testers
efficiently manage and execute tests by grouping related functionalities, scenarios, or modules together.
Whether manual or automated, test suites aim to ensure comprehensive test coverage and maintainability of
test cases within a project.
37) Discuss the various aspects that can follow in test approach?
### 1. Objective:
This section sets the overall aim of the testing plan, detailing the intended procedures and methodologies to
ensure the delivery of high-quality software. It comprises:
- **Functionality and Performance Testing:** Identifying what features and aspects of the application will
undergo testing.
- **Goal Setting:** Establishing specific objectives and targets based on the application's features and
functionalities.
### 2. Scope:
- **In-Scope:** Clearly defines the modules or functionalities that require rigorous testing.
- **Out of Scope:** Specifies the areas or modules exempted from intensive testing efforts.
- **Example:** Illustrates scenarios like purchasing modules from external sources and their limited testing
scope.
### 4. Approach:
- **High-Level Scenarios:** Defines crucial scenarios representing critical features for testing.
- **Flow Graph:** Utilized to streamline processes or depict complex interactions for efficient
understanding.
### 5. Assumptions:
- **Explanation:** Outlines assumptions made concerning the testing process.
- **Example:** Assures cooperation and support from the development team or adequate resource
allocation.
### 6. Risks:
- **Identify Risks:** Recognition of potential risks related to assumption failures.
- **Reasons:** Explains factors causing risks, such as managerial issues or resource scarcity.
### 9. Schedule:
- **Timeline:** Provides a detailed timeline indicating the start and end dates of various testing-related
activities.
- **Example:** Marks periods for test case writing, execution, and reporting.
Each component serves a specific purpose in planning, executing, and reporting on the testing process,
contributing to the overall success of the software testing effort.
39. Write in detail about system test
planning?
Test Plan
A test plan is a detailed document which describes software testing areas and activities. It outlines the test
strategy, objectives, test schedule, required resources (human resources, software, and hardware), test
estimation and test deliverables.
The test plan is a base of every software's testing. It is the most crucial activity which ensures availability of
all the lists of planned activities in an appropriate sequence.
The test plan is a template for conducting software testing activities as a defined process that is fully
monitored and controlled by the testing manager. The test plan is prepared by the Test Lead (60%), the Test
Manager (20%), and the test engineer (20%).
A Master Test Plan is a type of test plan that spans multiple levels of testing and includes a complete test strategy.
A Specific Test Plan is designed for a major type of testing such as security testing, load testing, performance testing,
etc. In other words, a specific test plan is designed for non-functional testing.
Making a test plan is the most crucial task of the test management process. According to IEEE 829, the
following seven steps are used to prepare a test plan.
#### Reliability
- **Manual Testing:** Vulnerable to human error, leading to less reliability.
- **Automated Testing:** More reliable due to consistent and repeatable test execution.
#### Maintenance
- Documenting and updating test results, scripts, and reports for future reference.
This is an extensive overview of the fundamentals, differences, types, and the process involved in
automated testing, emphasizing its benefits and key considerations.
UNIT-5
The details and significance of the Test Execution Working Document outline (Table 5.1) are broken down
below:
### 1. Test Engineers Section
- **Purpose:** Lists names, availability, and expertise of test engineers on the project.
- **Significance:** Aids in resource allocation and identifies areas where additional expertise may be
required.
This document structure outlines essential elements required for effective test execution management. It
focuses on resource allocation, skill enhancement, test case allocation, automation progress, test bed
readiness, defect resolution, and collaboration among team members throughout the testing lifecycle.
42) Discuss the various Metrics needed for Tracking System Testing?
The metrics used to track system test execution and defect monitoring in software development projects are
described below:
2. **Test Case Pass/Fail Ratio:** Analyzing the ratio of passed to failed test cases helps in assessing the
system's stability and identifying areas needing attention.
3. **Test Coverage:** Measures the extent to which the software's functionality has been tested. It includes
code coverage and functional coverage.
4. **Test Cycle Duration:** Measures the time taken to complete a test cycle, indicating the efficiency of
testing activities.
5. **Defect Discovery Rate:** Tracks the rate at which defects are being discovered during testing,
providing insight into the software's quality.
2. **Defect Aging:** Tracks how long defects remain unresolved, helping identify bottlenecks in the defect
resolution process.
3. **Severity Distribution:** Categorizes defects based on severity levels (critical, major, minor) to
prioritize resolution efforts.
4. **Defect Reopen Rate:** Measures the rate at which previously closed defects are reopened, indicating
the effectiveness of fixes.
5. **Root Cause Analysis:** Tracks the root causes of defects to identify recurring issues and implement
preventive measures.
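A hedged sketch of how a few of these tracking metrics could be computed from raw counts; all figures below are invented:

```python
# Invented counts from one week of a system test cycle.
passed, failed           = 390, 30
defects_found_this_week  = 18
closed_defects, reopened = 40, 4

pass_fail_ratio       = passed / failed                  # test case pass/fail ratio
defect_discovery_rate = defects_found_this_week          # defects discovered per week
defect_reopen_rate    = 100 * reopened / closed_defects  # % of closed defects reopened

print(f"pass/fail = {pass_fail_ratio:.1f}, "
      f"discovery = {defect_discovery_rate}/week, "
      f"reopen rate = {defect_reopen_rate:.0f}%")
```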
2. **Risk Mitigation:** Early detection of high defect density or failing test cases allows for proactive
measures to address issues, reducing project risks.
4. **Process Improvement:** Post-project analysis using collected metrics aids in identifying areas for
process enhancement and future project planning.
By actively monitoring these metrics during system testing, management gains a clear understanding of the
project's health, enabling them to make informed decisions, take corrective actions, and ensure the
successful delivery of a high-quality product to the customer.
43) Illustrate the different Metrics for Monitoring Defect Reports in detail?
Metrics for Monitoring Test Execution
- **Decision-Making:** Enables informed decisions, corrective actions, and resource allocation to ensure
project success and quality product delivery.
By actively tracking these metrics, project teams gain visibility into test progress, system understanding,
defect trends, and resolution efficiency, empowering them to steer projects effectively, mitigate risks, and
maintain product quality.
44) Explain in detail about System Test Report
The structure and contents of a Final System Test Report are crucial for summarizing the test project's
outcomes and providing insights into the testing process. Let's dive into the different sections and elements
that compose this report:
- **Defects Summary:** Details the number of defects filed, their different states (e.g., irreproducible,
FAD, closed, shelved, duplicate), postponed, assigned, and open, providing a comprehensive view of defect
statuses.
- **Execution Details:** Records who, when, and where the code was tested, providing insights into the
execution process.
- **Impact Analysis:** If any configuration changes during testing led to defects, this section covers how
such changes affected the defects, testing scope, and coverage.
- **Documentation for Future Reference:** Serves as a reference for future projects, aiding in
understanding testing methodologies and outcomes.
The report aims to present a consolidated view of the test project's execution, outcomes, and defect statuses
to facilitate decision-making and future planning.
Software Quality:
Quality is a complex concept—it means different things to different people, and it is highly context
dependent.
User View: This view perceives quality as fitness for purpose. According to this view, while evaluating the quality
of a product, one must ask the key question: “Does the product satisfy user needs and expectations?”
Manufacturing View: Here quality is understood as conformance to the specification. The quality level of a
product is determined by the extent to which the product meets its specifications.
Product View: In this case, quality is viewed as tied to the inherent characteristics of the product.
Value-Based View: Quality, in this perspective, depends on the amount a customer is willing to pay for it.
3. **Perfective Maintenance:** Enhances the software by adding new functionalities or improving existing
ones.
- **Quality Assurance:** The goal is to maintain and improve software quality post-release, ensuring that
patches and updates don't introduce new issues or affect existing functionalities negatively.
- **Continuous Improvement:** Experimentation and review processes aim to enhance the effectiveness of
testing strategies, contributing to ongoing improvements in the software maintenance process.
The sustaining phase is crucial in ensuring customer satisfaction, maintaining software quality, and adapting
the product to evolving needs and environments. It involves a proactive approach to address reported issues
swiftly and efficiently while continually enhancing the software's capabilities.
1. **Planned vs. Actual Execution (PAE) Rate:** Comparing the planned number of test cases against the
actual executed cases on a weekly basis.
2. **Estimating Execution Challenges:** Noting the initial performance challenges and unforeseen factors
like delayed stress and load tests due to late fixes.
3. **Execution Status of Test (EST) Cases:** Enumerating the status of executed test cases in categories
like passed, failed, invalid, blocked, and untested.
4. **Decision-Making on Test Cycle Abandonment:** Evaluating whether to abandon the test cycle based
on predefined criteria and actual execution figures.
5. **Decision Rationale:** Justifying the decision to continue the cycle despite not meeting planned test
execution numbers.
6. **Overall Observations and Lessons:** Reflecting on the learnings from the challenges faced during the
execution cycle.
These metrics allow teams to track progress, identify challenges, and make informed decisions during the
test execution process.
Let's consider a hypothetical project, "RocketBoost," where 800 test cases are designated for execution
within a 10-week test cycle. Here's a breakdown of the test execution metrics and scenarios:
- **Actual Execution:**
- Week 1: Execute 35 test cases.
- Week 2: 70 test cases were completed.
- Week 4: Achieved 180 test cases executed.
This hypothetical example demonstrates how discrepancies between planned and actual test execution rates,
coupled with procedural rectifications, influence decisions about abandoning or continuing a test cycle.
Such evaluations guide future planning and emphasize early preparedness to mitigate execution delays.
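Using the invented RocketBoost figures above, and assuming the 800 planned cases are spread evenly over the 10 weeks, a minimal sketch of the planned-versus-actual comparison:

```python
# Assumption: planned cumulative executions grow evenly across the 10-week cycle.
planned_per_week  = 800 / 10
actual_cumulative = {1: 35, 2: 70, 4: 180}   # weeks reported in the example above

for week, actual in actual_cumulative.items():
    planned  = planned_per_week * week
    pae_rate = 100 * actual / planned         # planned vs. actual execution rate
    print(f"week {week}: planned {planned:.0f}, actual {actual}, PAE = {pae_rate:.0f}%")
```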
1. **Function as Designed (FAD) Count:** This metric evaluates the understanding level of test engineers
about the system. FADs represent reported defects that are not actual system issues but stem from a
misunderstanding. If the FAD count exceeds a certain threshold (e.g., 10%), it signals inadequate system
understanding. Lower FAD counts indicate better comprehension.
2. **Irreproducible Defects (IRD) Count:** IRDs are defects that can't be replicated or consistently
observed after initial reporting. Reproducibility is crucial for developers to understand and fix issues. High
IRD counts may imply communication or documentation gaps between testers and developers.
3. **Defects Arrival Rate (DAR) Count:** DAR measures the rate at which defect reports come in from
various sources (system testing, software development, SIT groups, etc.). This metric tracks how defects
accumulate during testing and from different contributors, aiding in resource allocation and issue
prioritization.
4. **Defects Rejected Rate (DRR) Count:** DRR assesses the rate at which reported defects are rejected
after attempted fixes. It reflects the complexity or validity of the reported issues. A high rejection rate may
indicate unclear defect reports or challenges in fixing those issues.
5. **Defects Closed Rate (DCR) Count:** DCR tracks the percentage of reported defects that undergo
successful resolution and subsequent verification by the testing team. It measures the efficiency of verifying
fixed issues, demonstrating the effectiveness of the resolution process.
6. **Outstanding Defects (OD) Count:** OD represents the number of unresolved defects at a given time.
These are reported issues that remain open and yet to be addressed. Tracking OD helps in prioritizing and
managing ongoing issues.
7. **Arrival and Resolution of Defects (ARD) Count:** ARD compares the rate of new defects being found
against the rate at which existing defects are resolved. It gives insights into the efficiency of the defect
identification and resolution process over time.
These metrics are valuable in understanding the quality of the product, the efficiency of defect resolution,
and the collaboration between different teams involved in the testing and development process. Tracking
them helps in proactive defect management and continuous improvement in the software development
lifecycle.
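An illustrative calculation (with invented figures) for a few of these defect-report metrics; the outstanding-defect count here is a deliberately simplified view:

```python
# Invented defect-report counts at one point in the test cycle.
reported, rejected, closed, fad = 200, 12, 150, 15

drr     = 100 * rejected / reported   # Defects Rejected Rate
dcr     = 100 * closed / reported     # Defects Closed Rate
fad_pct = 100 * fad / reported        # Function-as-Designed share of reports

# Simplified outstanding-defect view: reports neither closed nor rejected yet.
outstanding = reported - closed - rejected

print(f"DRR = {drr:.0f}%, DCR = {dcr:.0f}%, FAD = {fad_pct:.1f}%, OD = {outstanding}")
```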
The shipment criteria are more than just the exit criteria of the final test cycle. This review should include
representatives from the key function groups responsible for delivery and support of the product, such as
engineering, operation, quality, customer support, and product marketing. A set of generic FCS readiness
criteria is as follows:
All the test cases from the test suite should have been executed. If any of the test cases is unexecuted, then
an explanation for not executing the test case should have been provided in the test factory database.
Test case results are updated in the test factory database with passed, failed, blocked, or invalid status.
Usually, this is done during the system test cycles.
The requirement database is updated by moving each requirement from the verification state to either the
closed or the decline state, as discussed in Chapter 11, so that compliance statistics can be generated from
the database. All the issues related to the EC must be resolved with the development group.
The pass rate of test cases is very high, say, 98%.
Once again, three weeks before the FCS, the FCS blocker defects are identified at the cross-functional
project meeting. These defects are tracked on a daily basis to ensure that the defects are resolved and closed
before the FCS.
In the meantime, as new defects are submitted, these are evaluated to determine whether these are FCS
blockers. If it is concluded that a defect is a blocker, then the defect is included in the blocker list and
tracked along with the other defects in the list for its closure.
However, calculating the number of defects not found during testing, which is needed to compute the defect
removal efficiency (DRE), can be complex. To approximate this, one approach is
to count defects reported by customers within a specified period post-deployment (e.g., six months). But to
interpret DRE effectively, several considerations are essential:
1. **Test Environment Limitations:** Certain defects may escape even in rigorous testing due to limitations
in the test environment. Deciding whether to include these defects in calculations depends on whether the
aim is to measure effectiveness inclusive of environment limitations.
2. **Defect Classification:** Differentiate defects that need corrective maintenance from those requiring
adaptive or perfective maintenance. Exclude the latter from the calculation as they often represent requests
for new features rather than actual issues.
3. **Consistency in Duration:** Determine a consistent start and end point for defect counting across all
test projects, like from unit testing to system testing, to maintain consistency.
4. **Long-Term Trend:** DRE should be seen as part of a long-term trend in test effectiveness for the
organization, not just a one-time project measure.
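As a worked sketch, DRE is commonly computed as the fraction of all known defects that were removed before release; the figures below are invented:

```python
found_in_testing   = 180  # defects found during the test project
found_post_release = 20   # defects reported by customers in the agreed window (e.g., six months)

# DRE: share of all known defects that were caught before release.
dre = 100 * found_in_testing / (found_in_testing + found_post_release)
print(f"DRE = {dre:.0f}%")   # here, 90% of known defects were removed before release
```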
**Fault Seeding Approach:** This method injects a small number of representative defects into the system
to estimate the number of escaped defects. The percentage of these seeded defects uncovered by sustaining
test engineers, who are unaware of the seeding, helps extrapolate the extent to which unknown defects
might have been found.
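A minimal sketch of the fault-seeding estimate with invented figures; the fraction of seeded defects found is used to extrapolate how many indigenous defects remain latent:

```python
seeded_total = 25    # defects deliberately injected
seeded_found = 20    # of those, how many the test effort uncovered
real_found   = 160   # indigenous (non-seeded) defects uncovered by the same effort

# If testing finds 80% of the seeded defects, assume it also found about 80% of real ones.
estimated_total_real = real_found * seeded_total / seeded_found
estimated_escaped    = estimated_total_real - real_found

print(f"estimated indigenous defects: about {estimated_total_real:.0f}, "
      f"still latent: about {estimated_escaped:.0f}")
```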
**Spoilage Metric:** Defects introduced at various stages of software development (requirements, design,
coding) are gradually removed through different testing phases (unit, integration, system, acceptance
testing).
By understanding these nuances, teams can accurately gauge the effectiveness of their testing efforts,
recognizing limitations and applying strategies like fault seeding to estimate escaped defects, thereby
improving product quality over time.