
ST SEM ANSWERS

● READ AT YOUR OWN RISK


● REFER DIGITAL NOTES AND PPT

1) Define Software Quality. Explain the various views of quality.


Software Quality:

Quality is a complex concept—it means different things to different people, and it is highly
context dependent.

Five views of quality:


Transcendental View: It envisages quality as something that can be recognized but is difficult
to define. The transcendental view is not specific to software quality alone but has been
applied in other complex areas of everyday life.

User View: It perceives quality as fitness for purpose. According to this view, while
evaluating the quality of a product, one must ask the key question: “Does the product satisfy
user needs and expectations?”

Manufacturing View: Here quality is understood as conformance to the specification. The quality level of a product is determined by the extent to which the product meets its specifications.

Product View: In this case, quality is viewed as tied to the inherent characteristics of the
product.

Value-Based View: Quality, in this perspective, depends on the amount a customer is willing
to pay for it.

2) List and explain the various sources of information for test case selection.

SOURCES OF INFORMATION FOR TEST CASE SELECTION:
Test case design remains a focus of both the research community and practitioners. A software development process generates a large body of information, such as the requirements specification, the design document, and the source code. To generate effective tests at a lower cost, test designers analyze the following sources of information:
⮚ Requirements and functional specifications
⮚ Source code
⮚ Input and output domains
⮚ Operational profile
⮚ Fault model

1.8.1: Requirements and Functional Specifications:


⮚ The process of software development begins by capturing user needs.
⮚ The nature and amount of user needs identified at the beginning of system
development will vary depending on the specific life-cycle model to be
followed.

1.8.2: Source Code:


⮚ Whereas a requirements specification describes the intended behavior of a system, the
source code describes the actual behavior of the system.
⮚ High-level assumptions and constraints take concrete form in
an implementation.
⮚ Though a software designer may produce a detailed design, programmers may introduce additional details into the system.

1.8.3: Input and Output Domains:


⮚ Some values in the input domain of a program have special meanings, and hence
must be treated separately.
⮚ To illustrate this point, let us consider the factorial function.
⮚ The factorial of a nonnegative integer n is computed as follows (a short code sketch follows):
   factorial(0) = 1;
   factorial(1) = 1;
   factorial(n) = n * factorial(n-1);
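
A minimal Python sketch (an added illustration, not part of the original notes; the handling of negative n is an assumption) showing how these special input-domain values become distinct test inputs:

```python
def factorial(n):
    """Compute n! for a nonnegative integer n, as defined above."""
    if n < 0:
        raise ValueError("factorial is undefined for negative integers")
    if n in (0, 1):              # special values in the input domain
        return 1
    return n * factorial(n - 1)

# Test inputs drawn from the input domain: the special values 0 and 1,
# a typical value, and an invalid (negative) value.
assert factorial(0) == 1
assert factorial(1) == 1
assert factorial(5) == 120
try:
    factorial(-3)
except ValueError:
    pass                         # expected: negative inputs lie outside the domain
```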
1.8.4: Operational Profile:
⮚ As the term suggests, an operational profile is a quantitative characterization
of how a system will be used.
⮚ It was created to guide test engineers in selecting test cases (inputs) using
samples of system usage.

1.8.5: Fault Model:


⮚ Previously encountered faults are an excellent source of information in
designing new test cases.

3)Explain in detail about White box Testing?

WHITE BOX TESTING:


White box testing is a testing technique that examines the program structure and derives test data from the program logic/code. Other names for white box testing are glass box testing, clear box testing, open box testing, logic-driven testing, path-driven testing, and structural testing.
✔ The various white box techniques are
● Statement Coverage - This technique is aimed at exercising
all programming statements with minimal tests.
● Branch Coverage - This technique runs a series of tests
to ensure that all branches are tested at least once.
● Path Coverage - This technique corresponds to testing all
possible paths which means that each statement and branch is
covered.
Basis Path Testing:
Basis path testing, a structured testing or white box testing technique used
for designing test cases intended to examine all possible paths of execution at least
once. Creating and executing tests for all possible paths results in 100% statement
coverage and 100% branch coverage.
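
A small Python sketch (an added illustration; the function is invented) of how the coverage criteria above apply to a function with a single decision:

```python
def classify(x):
    """Return 'negative' or 'non-negative' depending on x."""
    if x < 0:
        return "negative"        # taken when the decision is True
    return "non-negative"        # taken when the decision is False

# Statement coverage: both return statements must execute, so at least one
# negative and one non-negative input are required.
# Branch coverage: the decision x < 0 must evaluate to both True and False.
# Path coverage: this function has exactly two execution paths, so the same
# two inputs also achieve 100% path coverage.
assert classify(-5) == "negative"       # exercises the True branch
assert classify(3) == "non-negative"    # exercises the False branch
```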
Flow graph notation:
✔ It is the standard notation used for representing the control flow within a program. It is also called a program graph.
✔ The following are the different flow graph notations:

1. Sequential flow graph: the statements are linear, i.e., executed one after another.

2. If-else flow graph: the condition is checked; if it is true one branch of statements is executed, if it is false the other branch is executed, and control then returns to the normal flow.

3. While flow graph: the condition is checked; if it is true the loop body is executed and the condition is checked again, and when it becomes false control leaves the loop.

4. Repeat-until flow graph: the statements are executed once, and the condition is then checked to decide whether the body is repeated or control leaves the loop.

5. Case flow graph: a condition is checked and, based on its result, one case among several is chosen before returning to the normal flow.

✔ In the graph, a circle represents a statement (node) and an arrow represents an edge (link) between nodes.
4)Explain in detail about Black box Testing?

Black box testing is like solving a puzzle without knowing what is inside the box. Imagine you have a closed box, and you can only interact with it through certain inputs, but you cannot see what happens inside. So, you try different inputs and observe the outputs to make sure the box works correctly.

✔ This method attempts to find errors in the following categories:


1) Interface error.
2) Errors in data structures.
3) Incorrect or missing functions.
4) Initialization and termination errors.
5) Behavior or performance errors.
✔ The following are the different types of black box testing:
1) Graph based testing
2) Equivalence partitioning
3) Boundary value analysis
4) Orthogonal array testing
There are a few methods under black box testing:

1. **Graph-based testing:** This involves creating a sort of map that shows the different parts of
the system and how they're connected. Think of it like drawing circles for different parts and lines
between them to show how they relate. Then, you test these connections to ensure they work
properly.
a) Unidirectional: it represents only one direction.
b) Bidirectional: a link that represents both directions.
c) Undirected: a link that does not represent any direction.
d) Parallel: used to represent more than one link (i.e.) multiple links.
✔ A node and a link each have an associated property:
a) Node weight: represents the properties of the node.
b) Link weight: represents the characteristics of the link.

2. **Equivalence partitioning:** This method organizes input data into groups. For instance, if a
program accepts numbers from 1 to 1000, instead of testing every single number, you'd categorize
them into three groups: numbers from 1 to 1000, numbers below 1, and numbers above 1000.
Testing a few numbers from each group helps cover all possibilities.

3. **Boundary value analysis:** This is like testing the edges of what the system can handle. For instance, if a program accepts numbers from 1 to 1000, you'd test the numbers 1, 1000, and a couple just below and above those limits. Errors often show up at the boundaries, so this helps catch those issues (see the sketch after this list).

4. **Orthogonal array testing (OAT):** This is a more organized way of testing when dealing with complex systems. Instead of testing every possible combination of inputs, OAT helps pick out a smaller number of tests that cover most scenarios efficiently. For example, if you have three things to test, each with three options, instead of doing 27 tests (3 x 3 x 3), OAT can figure out nine tests that cover a lot of ground (a nine-test example appears at the end of this answer).
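
A short Python sketch (added for illustration; the validator is hypothetical) showing how the 1-to-1000 example from the equivalence partitioning and boundary value methods above turns into concrete test inputs:

```python
def accepts(n):
    """Hypothetical validator for the example above: accepts integers 1..1000."""
    return 1 <= n <= 1000

# Equivalence partitioning: one representative value per class is enough.
assert accepts(500) is True     # valid class: 1..1000
assert accepts(-7) is False     # invalid class: below 1
assert accepts(2500) is False   # invalid class: above 1000

# Boundary value analysis: test at and just around the limits.
for n, expected in [(0, False), (1, True), (2, True),
                    (999, True), (1000, True), (1001, False)]:
    assert accepts(n) is expected
```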

Overall, black box testing is about checking if the system behaves as expected without needing to
know its internal workings. It's like being a detective, figuring out how the pieces fit together and
making sure everything works smoothly.
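
The nine-test example mentioned in the orthogonal array method above can be made concrete with the following added, hedged sketch: it hard-codes a standard L9 orthogonal array for three factors with three levels each and checks that every pair of levels appears together at least once, which is why nine tests can stand in for all 27 combinations.

```python
from itertools import combinations

# Each row is one test: the level (0, 1, or 2) chosen for factors A, B, C.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

# Pairwise coverage check: for every pair of factors, all 3 x 3 = 9
# combinations of levels must occur somewhere among the 9 tests.
for f1, f2 in combinations(range(3), 2):
    seen = {(row[f1], row[f2]) for row in L9}
    assert len(seen) == 9, f"factors {f1} and {f2} are not fully pair-covered"

print("9 tests cover every pairwise combination of 3 factors x 3 levels")
```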
5)List & Explain the Role of testing in detail?
ROLE OF TESTING
⮚ Testing plays an important role in achieving and assessing the quality of software
Product.
⮚ Software quality assessment can be divided into two broad categories, namely,
static analysis and dynamic analysis.

Static Analysis:
⮚ As the term “static” suggests, it is based on the examination of a number of
documents, namely requirements documents, software models, design documents, and
source code.
⮚ Traditional static analysis includes code review, inspection, walk-through,
algorithm analysis, and proof of correctness.
⮚ It does not involve actual execution of the code under development. Instead, it
examines code and reasons over all possible behaviours that might arise during run time.
Compiler optimizations are a standard example of static analysis.

Dynamic Analysis:
Dynamic analysis of a software system involves actual program execution in order to
expose possible program failures.
⮚ The behavioural and performance properties of the program are also observed.
⮚ Programs are executed with both typical and carefully chosen input values.
⮚ Often, the input set of a program can be impractically large.
⮚ However, for practical considerations, a finite subset of the input set can be selected.

What is static testing?

Static testing is a software testing method that examines a program -- along with any
associated documents -- but does not require the program to be executed. Dynamic testing,
the other main category of software testing, requires testers to interact with the program while
it runs. The two methods are frequently used together to ensure the basic functionalities of a
program.

Instead of executing the code, static testing checks the code, design documents, and requirements before the program is run, in order to find errors. The main goal is to find flaws in the early stages of development, because it is normally easier to find the sources of possible failures this way.

What is subject to static testing?

It is common for code, design documents, and requirements to be statically tested before the software is run, in order to find errors. Anything that relates to functional requirements can also be checked. More specifically, the process will involve reviewing written materials that provide
a wider view of the tested software application as a whole. Some examples of what's tested
include the following:

● requirement specifications

● design documents

● user documents

● webpage content

● source code

● test cases, test data and test scripts

● specification and matrix documents

Types of static testing reviews


The first step in static testing is reviews. They can be conducted in numerous ways and look
to find and remove errors found in supporting documents. This process can be carried out in
four different ways:

● Informal. Informal reviews will not follow any specific process to find errors.
Co-workers can review documents and provide informal comments.

● Walk-through. The author of the document in question will explain the document to
their team. Participants will ask questions and write down any notes.

● Inspection. A designated moderator will conduct a strict review as a process to find defects.

● Technical or peer reviews. Technical specifications are reviewed by peers to detect any
errors.

Dynamic Testing
It is a type of software testing performed to analyze the dynamic behavior of the code. It includes testing the software with input values and analyzing the resulting output values. Dynamic testing is basically performed to describe the dynamic behavior of code. It refers to the observation of the system's response to variables that are not constant and change with time. To perform dynamic testing, the software must be compiled and run. It involves working with the software by giving input values and checking whether the output is as expected, by executing particular test cases either manually or through automation. Of the two V's, verification and validation, validation corresponds to dynamic testing.
Levels of Dynamic Testing
There are various levels of Dynamic Testing. They are:

● Unit Testing

● Integration Testing

● System Testing

● Acceptance Testing
There are several levels of dynamic testing that are commonly used in the software
development process, including:

1. Unit testing: Unit testing is the process of testing individual software components or "units" of code to ensure that they are working as intended. Unit tests are typically small and focus on testing a specific feature or behavior of the software.

2. Integration testing: Integration testing is the process of testing how different components of the software work together. This level of testing typically involves testing the interactions between different units of code and how they function when integrated into the overall system.

3. System testing: System testing is the process of testing the entire software system to ensure that it meets the specified requirements and is working as intended. This level of testing typically involves testing the software's functionality, performance, and usability.

4. Acceptance testing: Acceptance testing is the final stage of dynamic testing, which is done to ensure that the software meets the needs of the end users and is ready for release. This level of testing typically involves testing the software's functionality and usability from the perspective of the end user.

6) Explain the various issues in testing?


Testing is a crucial part of software development, but it also comes with its fair share of challenges
and issues. Here are some of the key problems that can arise during testing:

1. **Incomplete Requirements:** When the requirements for the software are not clear or are
constantly changing, it becomes challenging to create accurate test cases. Without a clear
understanding of what the software is supposed to do, testing becomes less effective.

2. **Time and Budget Constraints:** Often, there's pressure to complete testing within tight
deadlines or limited budgets. This might lead to rushed testing, cutting corners, or inadequate
coverage, risking the quality of the final product.
3. **Lack of Resources:** Testing requires skilled professionals, tools, and infrastructure.
Sometimes there might be a shortage of skilled testers, insufficient testing environments, or
outdated tools, hampering the testing process.

4. **Complexity of Systems:** Modern software systems are complex, with intricate interactions
between various components. Testing such systems comprehensively becomes a daunting task, and
ensuring all scenarios are covered becomes increasingly challenging.

5. **Dependency on Human Judgment:** Testing involves human judgment, which can introduce
biases or overlook certain issues. Testers might miss edge cases or make assumptions that lead to
overlooking critical defects.

6. **Regression Testing:** As software evolves and new features are added or bugs are fixed,
regression testing becomes essential to ensure that changes haven't introduced new issues
elsewhere in the system. This can be time-consuming and resource-intensive.

7. **Environment and Data Management:** Testing requires specific environments that mimic the
real-world conditions where the software will be used. Managing these environments and ensuring
accurate and relevant test data can be complex.

8. **Automated Testing Challenges:** While automated testing can improve efficiency, creating
and maintaining automated test scripts requires time and effort. Maintenance becomes crucial as
the software evolves, and changes in the application can cause automated tests to become outdated.

9. **Communication and Collaboration:** Ineffective communication between development,


testing, and other stakeholders can lead to misunderstandings, delays in bug fixes, or mismatches
between expectations and actual results.

10. **Defect Tracking and Management:** Managing a large number of identified defects,
prioritizing them, and ensuring they get addressed appropriately is a significant challenge in
testing. Without a robust system for defect tracking, some issues might slip through the cracks.

7)Describe the various Testing Activities in detail?

TESTING ACTIVITIES:

In order to test a program, a test engineer must perform a sequence of testing activities. Most of
these activities have been shown in Figure 1.6 and are explained in the following. These
explanations focus on a single test case.
Identify an objective to be tested:
The first activity is to identify an objective to be tested. The objective defines the intention, or
purpose, of designing one or more test cases to ensure that the program supports the objective.
A clear purpose must be associated with every test case.

Select inputs:
The second activity is to select test inputs. Selection of test inputs can be based on the requirements
specification, the source code, or our expectations.
Test inputs are selected by keeping the test objective in mind.

Compute the expected outcome:


The third activity is to compute the expected outcome of the program with the selected inputs.
In most cases, this can be done from an overall, high-level understanding of the test objective and
the specification of the program under test.
Set up the execution environment of the program:
The fourth step is to prepare the right execution environment of the program. In this step all the
assumptions external to the program must be satisfied.
A few examples of assumptions external to a program are as follows:
Initialize the local system, external to the program.
This may include making a network connection available, making the right database system
available, and so on.
Initialize any remote, external system

Execute the program:


In the fifth step, the test engineer executes the program with the selected inputs and observes the
actual outcome of the program.
To execute a test case, inputs may be provided to the program at different physical locations at
different times.
The concept of test coordination is used in synchronizing different components of a test
case.
Analyze the test result:
The final test activity is to analyze the result of test execution. Here, the main task is to
compare the actual outcome of program execution with the expected outcome.
The complexity of comparison depends on the complexity of the data to be observed.
The observed data type can be as simple as an integer or a string of characters, or as complex as an image, a video, or an audio clip.
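
A compact Python sketch (an added illustration; the function under test and its expected outcome are hypothetical) walking through these activities for a single test case:

```python
# Objective: verify that the program under test computes an order total
# correctly for a typical input (a hypothetical example function).
def order_total(prices, tax_rate):
    return round(sum(prices) * (1 + tax_rate), 2)

# Select inputs (driven by the test objective).
prices, tax_rate = [10.00, 5.50], 0.10

# Compute the expected outcome from the specification, not from the code.
expected = 17.05   # (10.00 + 5.50) * 1.10

# Set up the execution environment (nothing external is needed here; in a
# real test this is where databases or network connections are prepared).

# Execute the program and observe the actual outcome.
actual = order_total(prices, tax_rate)

# Analyze the test result: compare the actual and expected outcomes.
print("PASS" if actual == expected else f"FAIL: got {actual}, expected {expected}")
```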

8) Explain the concepts of unit, integration, system, acceptance, and regression testing

Let us take a deep dive into each level of software testing:

### 1. Unit Testing:


- **What it is:** Unit testing involves testing individual units or components of the software
independently. These units can be functions, methods, classes, or modules.
- **Purpose:** To ensure that each unit performs as expected and meets its design and functional
specifications.
- **Scope:** Focuses on isolated and small portions of the codebase.
- **Performed by:** Primarily carried out by developers during the coding phase.
- **Key Objectives:**
- Validating that each unit functions correctly.
- Verifying that the code behaves as intended.
- Checking if the units adhere to design and functional requirements.
- **Testing Techniques:** Developers use stubs (dummy code) or drivers (test code) to simulate dependencies while testing the unit (a sketch follows this list).
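
A minimal Python sketch (added; the module and class names are invented) of the stub idea from the point above, testing a unit in isolation from its dependency:

```python
# Unit under test: computes a discounted price using a pricing service.
def discounted_price(item_id, pricing_service, discount=0.20):
    base = pricing_service.get_price(item_id)   # external dependency
    return round(base * (1 - discount), 2)

# Stub: dummy replacement for the real pricing service, so the unit
# can be tested in isolation before integration.
class PricingServiceStub:
    def get_price(self, item_id):
        return 100.00   # canned response, independent of item_id

def test_discounted_price_with_stub():
    stub = PricingServiceStub()
    assert discounted_price("SKU-1", stub) == 80.00

test_discounted_price_with_stub()
print("unit test with stub passed")
```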

### 2. Integration Testing:


- **What it is:** Integration testing is about testing how various units/modules/components
integrate and work together.
- **Purpose:** To ensure that integrated units cooperate and interact correctly as per the design.
- **Scope:** Focuses on checking the interactions, data flow, and communication between
different modules.
- **Performed by:** Test engineers and developers working together.
- **Key Objectives:**
- Validating that integrated units work together without issues.
- Detecting interface defects between modules.
- Ensuring data communication between modules is accurate.
- **Testing Techniques:** Incremental integration (step-by-step combining units) and Big Bang
(combining all units at once) are two common approaches.

### 3. System Testing:


- **What it is:** System testing involves testing the entire software system as a whole, including
its functional and non-functional aspects.
- **Purpose:** To validate if the complete software system meets the specified requirements.
- **Scope:** Covers end-to-end testing in an environment that simulates or mirrors the production
environment.
- **Performed by:** Test engineers in a dedicated testing environment.
- **Key Objectives:**
- Verifying the system's functionality against specifications.
- Assessing non-functional aspects like performance, security, and usability.
- Ensuring the software meets business needs.
- **Testing Techniques:** Includes functional testing, usability testing, performance testing,
security testing, etc.

### 4. Acceptance Testing:


- **What it is:** Acceptance testing validates if the software fulfills the user or customer
requirements.
- **Purpose:** To ensure that the software is ready for release by checking if it meets business
needs and user expectations.
- **Scope:** Involves end-users testing the product in a real or simulated environment.
- **Performed by:** Customers or end-users.
- **Key Objectives:**
- Confirming that the software meets user needs and business requirements.
- Verifying if the software is user-friendly and aligns with real-world scenarios.
- Assessing if the software is ready for deployment/release.
- **Testing Techniques:** User Acceptance Testing (UAT), Business Acceptance Testing (BAT),
Alpha testing, Beta testing, etc.

### 5. Regression Testing:


- **What it is:** Regression testing involves re-testing the unchanged parts of the software after
modifications or new features have been introduced.
- **Purpose:** To ensure that recent changes or fixes have not affected the existing functionalities
of the software.
- **Scope:** Re-validates existing functionalities that are not impacted by changes.
- **Performed by:** Test engineers, often using automated test scripts.
- **Key Objectives:**
- Checking that existing functionalities remain intact after changes.
- Identifying and fixing any new bugs introduced by changes.
- Verifying the stability of the overall system.
- **Testing Techniques:** Automated test suites are commonly used to run the previously created
test cases efficiently.

Each level of testing plays a critical role in ensuring the software's reliability, functionality, and
alignment with user expectations before it is released or deployed.
9) Give a brief explanation of verification and validation
VERIFICATION AND VALIDATION

1.3 Verification:
⮚ This kind of activity helps us in evaluating a software system by
determining whether the product of a given development phase
satisfies the requirements established before the start of that phase.
Validation:
⮚ Activities of this kind help us in confirming that a product meets its
intended use.
⮚ Validation activities aim at confirming that a product meets its
customer’s expectations.
1.4 OBJECTIVES OF TESTING:
⮚ The stakeholders in a test process are the programmers, the test engineers, the
project managers, and the customers.
⮚ A stakeholder is a person or an organization who influences a system’s behaviors
or who is impacted by that system.
⮚ Different stakeholders view a test process from different perspectives as explained
below:
1.4.1 It does work:
⮚ While implementing a program unit, the programmer may want to test whether or
not the unit works in normal circumstances.
⮚ The programmer gets much confidence if the unit works to his or her satisfaction.
The same idea applies to an entire system as well—once a system has been
integrated, the developers may want to test whether or not the system performs the
basic functions.
⮚ Here, for psychological reasons, the objective of testing is to show that the
system works, rather than that it does not work.
1.4.2 It does not work:
⮚ Once the programmer (or the development team) is satisfied that a unit (or the
system) works to a certain degree, more tests are conducted with the objective of
finding faults in the unit (or the system).
⮚ Here, the idea is to try to make the unit (or the system) fail.
1.4.3. Reduce the risk of failure:
⮚ Most of the complex software systems contain faults, which cause the system
to fail from time to time.
⮚ This concept of “failing from time to time” gives rise to the notion of failure
rate.
⮚ As faults are discovered and fixed while performing more and more tests, the
failure rate of a system generally decreases.
⮚ Thus, a higher level objective of performing tests is to bring down the risk of
failing to an acceptable level.
1.4.4: Reduce the cost of testing:
⮚ The different kinds of costs associated with a test process include
⮚ The cost of designing, maintaining, and executing test cases,
⮚ The cost of analysing the result of executing each test case,
⮚ The cost of documenting the test cases, and
⮚ The cost of actually executing the system and documenting it.
10) Write short notes on Test Planning and Design, and explain how Monitoring and Measuring Test Execution is carried out
### Test Planning and Design:

**Test Planning:**
- **Purpose:** To organize for test execution by defining the framework, scope, resources needed,
effort required, schedule, and budget.
- **Components:**
- **Framework:** Set of ideas or circumstances guiding the tests.
- **Scope:** Domain or extent of test activities, covering managerial aspects.
- **Details:** Outlining resource needs, effort, activity schedules, and budget.

**Test Design:**
- **Purpose:** To critically study system requirements, identify testable system features, define
test objectives, and specify test case behavior.
- **Steps Involved:**
- Critical study of system requirements.
- Identification of testable system features.
- Defining test objectives based on requirements and functional specifications.
- Designing test cases for each test objective.
- Creating modular test components called test steps within test cases.
- Combining test steps to form complex, multistep tests.
- Ensuring clear and understandable test case specifications for ease of use and reuse.

### Monitoring and Measuring Test Execution:

**Importance of Monitoring and Measurement:**


- **Principles:** Essential in scientific and engineering endeavors, applicable to testing phases in
software development.
- **Purpose:** To monitor progress, reveal system quality levels, and trigger corrective actions
based on metrics.
- **Types of Metrics:**
- Metrics for monitoring test execution process.
- Metrics for monitoring defects found during test execution.
- **Regular Analysis:** Metrics need to be tracked and analyzed periodically, say, on a daily or
weekly basis.

**Effective Use of Metrics:**


- **Decision Making:** Vital for controlling test projects, triggering revert criteria, and initiating
root cause analysis of problems.
- **Valid Information:** Ensuring gathering of accurate and valid information about the project
status.
- **Decision-Informing Data:** Metrics must enable management decisions that result in reduced
costs, minimized delays, and improved software quality.
- **Importance of Analysis:** Metrics hold meaning when analyzed for decision-making rather
than just collecting raw data.

**Bottom Line:** Metrics gathered during test execution provide critical insights that aid in
decision-making, allowing management to make informed choices for effective project control,
quality improvement, and cost reduction.
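
A small Python sketch (an added illustration with made-up numbers) of the kind of periodically tracked metric described above, computing weekly test execution progress and defect counts:

```python
# Weekly snapshots: (tests planned, executed, passed, defects found to date)
weekly = [
    ("week 1", 120, 40, 35, 12),
    ("week 2", 120, 85, 74, 21),
    ("week 3", 120, 120, 112, 27),
]

for week, planned, executed, passed, defects in weekly:
    execution_pct = 100 * executed / planned
    pass_rate = 100 * passed / executed
    print(f"{week}: {execution_pct:.0f}% of planned tests executed, "
          f"{pass_rate:.0f}% of executed tests passed, "
          f"{defects} defects found to date")
```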

UNIT-2
11)Explain the Concept of Integration Testing

Integration Testing Concept:


Integration testing is about assembling individual software modules to construct a functioning
system. Here’s a deeper look into this concept:

**Understanding Modules and Systems:**


- **Modules:** Self-contained elements like subroutines, functions, or classes with well-defined
interfaces.
- **System:** A collection of interconnected modules working together to achieve a specific
objective.

**Unit Testing and Module Control:**


- **Unit Testing:** Individual modules are tested independently (unit testing) by programmers
using white-box techniques.
- **Control:** At the unit testing level, the system exists as separate pieces controlled by the
programmers.

**Challenges in Integration:**
- **Interface Errors:** Assembling a complete system from modules encounters interface errors
due to their interconnections.
- **Stability Testing:** Creating a stable system from components requires extensive testing.
- **Phases in Building a Deliverable System:** Integration testing and system testing are key
phases in constructing a deliverable system.

**Objectives of Integration Testing:**


- **Stable System Assembly:** To construct a reasonably stable system in a controlled
environment for further testing.
- **Addressing Module Challenges:** Identifying error-prone modules and ensuring their
integration with minimal disruptions to the overall system.

**Importance of Integration Testing:**


- **Diverse Module Origins:** Modules are created by different developers, requiring integration
to form a coherent system.
- **Controlled Unit Testing Environment:** Unit tests are conducted in a controlled environment
using test drivers and stubs.
- **Varying Module Complexity:** Modules vary in complexity, and identifying error-prone ones
is crucial.

**System Integration Testing:**


- **Objective:** Building a working system by incrementally adding modules and ensuring new
integrations don’t disrupt existing functionalities.
- **System Stability:** Ensuring that added modules function as expected without disturbing the
already assembled components.

### Various Types of Interfaces:

**Importance of Interfaces:**
- **Facilitate Functionality Realization:** Modules interface to realize functional requirements and
allow one module to access services from another.
- **Control and Data Transfer:** Interfaces establish mechanisms for passing control and data
between modules.

12)Discuss in detail the Various Types of Interfaces


⮚ Modularization is an important principle in software design, and modules are
interfaced with other modules to realize the system’s functional requirements.
⮚ An interface between two modules allows one module to access the service provided by the other.
⮚ It implements a mechanism for passing control and data between modules.

Three common paradigms for interfacing modules are as follows (a short code sketch follows the list):


1. Procedure Call Interface:
⮚ A procedure in one module calls a procedure in another module.
⮚ The caller passes on control to the called module.
⮚ The caller can pass data to the called procedure, and the called procedure can
pass data to the caller while returning control back to the caller.
2. Shared Memory Interface:
⮚ A block of memory is shared between two modules.
⮚ The memory block may be allocated by one of the two modules or a third
module.
⮚ Data are written into the memory block by one module and are read from the
block by the other.
3. Message Passing Interface:
⮚ One module prepares a message by initializing the fields of a data structure
and sending the message to another module.
⮚ This form of module interaction is common in client–server-based systems
and web-based systems.
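
A compact Python sketch (added for illustration; all names are invented and the shared dictionary merely stands in for a real shared memory block) hinting at the three interfacing paradigms listed above:

```python
import queue

# 1. Procedure call interface: the caller passes control and data to the
#    called procedure and receives data back when control returns.
def compute_tax(amount):
    return amount * 0.18

total = 100 + compute_tax(100)

# 2. Shared memory interface: one module writes into a shared block,
#    another module reads from it.
shared_block = {}                 # stands in for a shared memory region
shared_block["status"] = "READY"  # writer module
status = shared_block["status"]   # reader module

# 3. Message passing interface: one module prepares a message and sends
#    it to another module, as in client-server and web-based systems.
mailbox = queue.Queue()
mailbox.put({"type": "REQUEST", "payload": "get_balance"})  # sending module
message = mailbox.get()                                     # receiving module

print(total, status, message["type"])
```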
13)Discuss in detail the Various Types of Interface Errors?
Interface errors encompass a wide range of challenges that can occur during software development. The main categories are broken down below:

### 1. Inadequate Functionality:


- **Implicit Assumptions:** Assumptions made about one part of the system's functionality that
another part would provide. When the expected functionality isn't delivered, it leads to errors.

### 2. Location of Functionality:


- **Disagreements or Misunderstandings:** Disputes or misunderstandings regarding where
specific functional capabilities lie within the software. It can arise due to design methodology or
inexperienced personnel.

### 3. Changes in Functionality:


- **Impact of Module Changes:** Modifying one module without adjusting related modules
properly can disrupt the program's functionality.

### 4. Added Functionality:


- **Undocumented Modifications:** Adding new functionality without proper documentation or
change requests might cause errors in version-controlled systems.

### 5. Misuse of Interface:


- **Interface Errors:** Mistakes made in using the interface of a called module, often seen in
procedure-call interfaces.

### 6. Misunderstanding of Interface:


- **Conflicting Assumptions:** Discrepancies between assumptions made by a called module
regarding parameter conditions and what the calling module passes.

### 7. Data Structure Alteration:


- **Design Specification Issues:** Problems with data structure size or field inadequacies due to
insufficient high-level design specifications.

### 8. Inadequate Error Processing:


- **Error Handling Issues:** Failures in handling error codes returned by called modules.

### 9. Additions to Error Processing:


- **Error Handling Modifications:** Errors resulting from changes in modules that impact error
handling techniques or miss necessary functionality.

### 10. Inadequate Postprocessing:


- **Resource Release Failures:** Errors due to not releasing resources (like memory) that are no
longer needed.

### 11. Inadequate Interface Support:


- **Mismatch in Interface Capability:** The functionality provided by the interface is insufficient
to support specified capabilities.

### 12. Violation of Data Constraints:


- **Data Relationship Mismatch:** Implementations not supporting specified relationships among
data items due to incomplete design specs.

### 13. Timing/Performance Problems:


- **Synchronization Errors:** Inadequate synchronization between processes leading to race
conditions or unexpected events.

### 14. Coordination of Changes:


- **Communication Issues:** Failures in communicating changes between interrelated software
modules.

### 15. Hardware/Software Interfaces:


- **Inadequate Hardware Integration:** Errors arising from software failing to handle hardware
devices properly.

Addressing these issues requires meticulous attention to detail, clear communication among
development teams, adherence to design specifications, robust testing practices, and rigorous code
reviews. By understanding these potential pitfalls, software developers and project managers can
proactively mitigate risks during the development lifecycle, leading to more robust and reliable
software systems.

14)Explain in detail the various System Integration Techniques with suitable example?

1. **Top-Down Integration:**
- The modules are arranged in a hierarchy, with the top-level module at the highest position and arrows drawn downwards to the lower-level modules it calls.
- Integration starts with the top-level module; lower-level modules and their connections are then added incrementally, build by build (a small code sketch of this idea appears at the end of this answer).
- The incremental integration of the modules is illustrated in Fig. 2.1 to Fig. 2.7.

Fig 2.2 Top-down integration of modules A and B.


Fig 2.3 Top-down integration of modules A, B, and D.

Figure 2.4 Top-down integration of modules A, B, D, and C.

Figure 2.5 Top-down integration of modules A, B, C, D, and E

Figure 2.6 Top-down integration of modules A, B, C, D, E, and F.

Figure 2.7 Top-down integration of modules A, B, C, D, E, F and G.


2. **Bottom-Up Integration:**
- Integration begins with the lowest-level modules at the bottom of the hierarchy, which are integrated and tested first.
- Higher-level modules are then integrated, step by step, with the already integrated lower-level modules.
- This step-by-step integration is illustrated in Fig. 2.8 to Fig. 2.10.

Figure 2.8 Bottom-up integration of modules E, F, and G

Figure 2.9 Bottom-up integration of modules B, C, and D with E, F, and G

Figure 2.10 Bottom-up integration of module A with all others.


3. **Sandwich and Big Bang:**
- In the sandwich approach, the module hierarchy is viewed as three layers (top, middle, bottom); top-down integration is applied to the upper layers and bottom-up integration to the lower layers, concurrently or sequentially, meeting at the middle layer.
- In the big-bang approach, all modules are integrated simultaneously after each has been tested individually.

The figures use shapes (boxes or circles) for modules and arrows for the integration flow; each figure builds upon the previous one, showing the incremental or hierarchical integration process described above.
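
A tiny Python sketch (an added illustration; the module names echo the A and B labels used in the figures, and the behaviour is invented) of the top-down idea referenced in point 1: module A is integrated and tested first against a stub for B, and the stub is later replaced by the real module B.

```python
# Stub for module B, used while only module A is integrated.
def b_stub(x):
    return 0            # canned, simplified behaviour

# Real module B, integrated in a later build.
def b_real(x):
    return x * x

# Top-level module A depends on B through a parameter, so the same test
# driver can exercise A with either the stub or the real module.
def a(x, b=b_stub):
    return b(x) + 1

assert a(3) == 1             # build 1: A integrated with the stub for B
assert a(3, b=b_real) == 10  # build 2: the stub is replaced by the real B
print("top-down incremental integration sketch passed")
```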

15)Elaborate the Granularity of System Integration Tests strategies in detail?


GRANULARITY OF SYSTEM INTEGRATION TESTING:
System integration testing is performed at different levels of granularity.
Integration testing includes both white- and black-box testing approaches.
Black-box testing ignores the internal mechanisms of a system and focuses solely on the outputs generated
in response to selected inputs and execution conditions.
The code is considered to be a big black box by the tester who cannot examine the internal details of the
system.

The tester knows the input to the black box and observes the expected outcome of the execution.
White-box testing uses information about the structure of the system to test its correctness.
It takes into account the internal mechanisms of the system and the modules.

Intrasystem Testing:
This form of testing constitutes low-level integration testing with the objective of combining the modules
together to build a cohesive system.
The process of combining modules can progress in an incremental manner, akin to constructing and testing successive builds.

Intersystem Testing:
Intersystem testing is a high-level testing phase which requires interfacing independently tested systems.
In this phase, all the systems are connected together, and testing is conducted from end to end.
The term end to end is used in communication protocol systems, and end- to-end testing means initiating a
test between two access terminals interconnected by a network.
The purpose in this case is to ensure that the systems under test interact and work together correctly, not to conduct a comprehensive test.

Pairwise Testing:
There can be many intermediate levels of system integration testing between the above two extreme levels,
namely intrasystem testing and intersystem testing.
Pairwise testing is a kind of intermediate level of integration testing.
In pairwise integration, only two interconnected systems in an overall system are tested at a time.
The purpose of pairwise testing is to ensure that the two systems under consideration can function together, assuming that the other systems behave correctly.

16)Illustrate how Test Plan is generated For System Integration in detail?


The System Integration Testing (SIT) plan is a comprehensive strategy to ensure that the integration process
of various modules into a cohesive system happens smoothly. Here's a detailed explanation of some aspects
outlined in the plan:

**Framework for SIT Plan:**


- **Controlled Execution Environment:** Ensure a controlled environment for executing integration tests.
This might include specific hardware configurations, software simulators, emulators, and specialized test
tools.
- **Communication:** Facilitate effective communication between developers and test engineers for
seamless integration.
- **Timeframe:** Acknowledge that system integration is a time-consuming process, often taking months,
and requires meticulous planning.
**Integration Levels Structure:**
- **Phases of Integration:** Break down integration into functional, end-to-end, and endurance phases.
- **Modules for Integration:** Define which modules will integrate in each phase.
- **Build Process and Schedule:** Determine how often builds will occur (daily, weekly, etc.) and set
schedules for each testing phase.
- **Test Environment and Resources:** Specify hardware, software, and testing techniques required for
each phase.

**Entry and Exit Criteria:**


- **Entry Criteria:** Conditions that must be met before starting an integration phase, ensuring the system
is ready for integration.
- **Exit Criteria:** Conditions signifying the completion of an integration phase, allowing progress to the
next phase.

**Integration Test Categories:**


- **Interface Integrity:** Testing internal and external interfaces to ensure communication accuracy and
formatting.
- **Functional Validity Tests:** Uncovering functional errors within modules after integration.
- **End-to-End Validity Tests:** Ensuring the entire system works cohesively from start to end.
- **Pairwise Validity Tests:** Verifying that pairs of systems work correctly when connected.
- **Interface Stress Tests:** Applying stress to module interfaces to assess load handling capacity.

**System Endurance Tests:**


- **System Stability:** Verifying that the integrated system remains operational for an extended period
without crashes or failures.
- **Stress Testing:** Subjecting the system to various stress scenarios such as error handling, event
handling, and resource limitations to gauge robustness.

The plan focuses on meticulous testing at various integration levels, ensuring seamless interaction between
modules, and evaluating the system's endurance and robustness under various stress conditions. It also
outlines precise entry and exit criteria for each phase, maintaining quality control throughout the integration
process.

17)Explain in detail Software and Hardware Integration?


1. **Diagnostic Test:**
- It's a fundamental test embedded in the system's BIOS that automatically runs during startup.
- This test aims to ensure the hardware's basic functionality by verifying if all essential hardware
components are operational.
- It acts as a preliminary check to identify any faulty hardware before the system proceeds with its regular
operations.

2. **Electrostatic Discharge Test (ESD):**


- ESD poses a risk to electronic systems, potentially damaging hardware components.
- Testing involves simulating electrostatic discharges using models like the Human Body Model (HBM),
Machine Model (MM), and Charged Device Model (CDM).
- Each model simulates different scenarios—HBM simulates discharges caused by human contact, MM
replicates discharges from machinery, and CDM mimics charges accumulated in devices.
- By subjecting hardware to these discharges, engineers assess the system's robustness against ESD.

3. **Electromagnetic Emission Test:**


- Measures and ensures that the system doesn't emit excessive electromagnetic radiation that could
interfere with other equipment or its own operation.
- Different types of emissions, such as electric and magnetic field radiated emissions, as well as conducted
emissions through various power and signal leads, are evaluated.
- Compliance with emission standards and regulations is a critical consideration during this test.

4. **Thermal Tests:**
- These tests evaluate how the system performs under varying temperature and humidity conditions.
- Hardware components are subjected to different temperature and humidity cycles, simulating real-world
conditions.
- Thermal sensors on heat-generating components monitor whether they exceed their specified operating
temperatures.
- Thermal shock tests replicate sudden, extreme temperature changes to assess the system's resilience.

5. **Environmental Tests:**
- Simulates real-world stress conditions like vibrations, shocks, extreme temperatures, and other
environmental factors.
- It's vital to ensure that external factors like vibrations from machinery or environmental elements like
smoke and sand do not adversely affect the system's performance or cause damage.

6. **Equipment Handling and Packaging Test:**


- Evaluates the system's durability during shipping and installation processes.
- Assessments include testing packaging design to ensure the safe transport of the system without damage.
- It also examines the design's ergonomics to verify that it can be easily handled during installation.

7. **Acoustic Test:**
- Measures the system's noise emission levels to ensure compliance with safety regulations.
- Avoids excessive noise that could affect personnel working in proximity to the system.

8. **Safety Test:**
- Focuses on identifying and eliminating potential hazards posed by the system to users or the
environment.
- Ensures that components like batteries do not leak dangerous substances and adhere to safety standards.

9. **Reliability Test:**
- Assesses the likelihood of hardware components failing over their operational lifespan.
- MTBF, or Mean Time Between Failures, is calculated to estimate the expected lifespan of components
before failure.

These comprehensive hardware tests are integral to ensuring that the hardware components are reliable,
durable, safe, and compliant with regulatory standards before integrating them into the larger system
involving software components. This rigorous testing mitigates risks and ensures the system's stability and
reliability in real-world conditions.

18) Difference between Unit and Integration Testing:

Unit testing checks individual modules (functions, methods, or classes) in isolation, is performed primarily by developers using white-box techniques with stubs and drivers, and focuses on whether each unit meets its design and functional specification. Integration testing assembles the already unit-tested modules and checks the interfaces and interactions between them, focusing on interface errors, data flow between modules, and whether the combined modules work together as a system (see the unit and integration testing descriptions in Question 8).

19.Does automation of integration tests help the verification of the daily build process? Justify your
answer.

1. Efficiency and Speed: Automated integration tests can be executed quickly and consistently.
They can cover a wide range of scenarios and functionalities within the system, ensuring that
various components work together as expected. This speed allows for faster verification of
the daily build process, identifying issues promptly.
2. Reliability: Automation eliminates human error and ensures that tests are executed
consistently. This reliability helps in accurately detecting integration issues that might arise
due to changes in the codebase.
3. Comprehensive Testing: Integration tests can encompass multiple modules or components of
a system simultaneously, checking how they interact and function together. Automated tests
can cover various integration points, which might be impractical or time-consuming to test
manually.
4. Regression Testing: With each new daily build, automated integration tests can be rerun
efficiently to check whether the changes introduced have impacted the existing functionalities
adversely. This helps catch regression issues early in the development cycle.
5. Continuous Monitoring: Automated tests can be integrated into Continuous
Integration/Continuous Deployment (CI/CD) pipelines, ensuring that integration tests are run
with each build. This continuous monitoring helps in early identification of integration issues,
promoting a more stable daily build process.
6. Faster Feedback Loop: Automated tests provide quick feedback on the status of the
integration process. If any integration failures occur, they can be reported immediately,
enabling developers to address them promptly before they escalate.
7. Documentation and Traceability: Automated integration tests serve as a form of
documentation for how different components should interact. They also provide traceability,
allowing developers to track changes and understand how modifications might affect the
integration.
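
A minimal Python sketch (added; the test path and the use of pytest are assumptions, not part of the original answer) of how a daily build script might run the automated integration suite and reject the build when integration breaks:

```python
import subprocess
import sys

def run_integration_tests():
    """Run the automated integration test suite for the daily build."""
    # Assumes the integration tests live under tests/integration and are
    # executed with pytest; any non-zero exit code means a test failure.
    result = subprocess.run(["pytest", "tests/integration", "-q"])
    return result.returncode == 0

if __name__ == "__main__":
    if run_integration_tests():
        print("Daily build verified: integration tests passed")
    else:
        print("Daily build rejected: integration tests failed")
        sys.exit(1)   # signal the CI pipeline to mark the build as broken
```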

20)Describe the circumstances under which you would apply white-box testing, back- box testing, or
both techniques to evaluate a COTS component.

When evaluating a Commercial Off-The-Shelf (COTS) component, the choice between white- box and
black-box testing (or a combination of both) depends on several factors, including access to the component's
internal structure, available documentation, and the testing objectives. Here's how each technique might be
applied:
White-box Testing:

1. Access to Internal Structure: If you have access to the source code or internal architecture of the COTS component, white-box testing can be highly effective. It involves understanding the component's internal logic and structure and designing tests that exercise its code paths and critical areas.

Black-box Testing:
1. Limited Access to Internal Structure: When the COTS component's internal structure is proprietary or inaccessible, black-box testing becomes the primary option. Testers rely on the software's external behavior, specifications, and requirements without knowledge of its internal workings.

Functional Validation: Black-box testing is ideal for evaluating the COTS component against its specified
functionalities, intended use cases, and documented requirements. This method focuses on inputs, outputs,
and system behavior rather than internal mechanisms.
User Perspective Testing: If the goal is to evaluate the component from an end-user
perspective without delving into its internal implementation details, black-box testing is more suitable. It
helps simulate real-world usage scenarios and identify usability issues.

Combined Approach (Gray-box Testing):

1. When Partial Access is Available: In some cases, testers might have partial access to the internal structure or some knowledge of the component's architecture. This situation can lead to a combined approach where both white-box and black-box techniques are employed (gray-box testing).

2. Comprehensive Testing: Using both techniques in conjunction allows for a more comprehensive
evaluation. White-box testing can focus on specific critical areas or customizations, while black-box testing
ensures that the component functions correctly according to its specifications.

UNIT-3

21)Discuss the Different type Of basic Tests in detail with suitable Example

The basic tests are fundamental assessments carried out to ascertain the preliminary functioning of a
system's key features. They are designed to provide an initial overview of system readiness without delving
into exhaustive testing. Let's elaborate on each of these tests:

### Boot Tests


These tests validate the system's ability to initiate its software image from various boot options like ROM,
FLASH card, or PCMCIA. Testing these options under different configurations ensures system flexibility
and robustness during startup.

### Upgrade/Downgrade Tests


This set of tests ensures that the system can smoothly transition between software versions. Verifying that
the system can be upgraded or downgraded without complications, known as rollback, is crucial for
software management and system stability.

### Light Emitting Diode (LED) Tests


LEDs on system front panels indicate operational status. These tests validate if LEDs accurately reflect the
system and subsystem statuses. For instance, green indicating operational status, blinking green suggesting
faults, and off indicating no power. Similar checks apply to Ethernet, cable link LEDs, or other subsystem
indicators.

### Diagnostic Tests


Also known as Built-In Self-Tests (BIST), these tests assess the functionality of hardware components or
modules. Examples include:
- **Power-On Self-Test (POST):** Executes diagnostic routines during system boot, ensuring proper
hardware functionality at a high level.
- **Ethernet Loop-Back Test:** Sends and receives packets through loop-back interfaces to verify Ethernet card functionality (see the sketch after this list).
- **Bit Error Test (BERT):** Transmits known bit patterns, assessing the error rate to ensure accurate data
transmission.
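
A simplified Python sketch (added; this is only a software-level analogy using a local UDP socket, not a real diagnostic driver, and the port number is arbitrary) of the loop-back and known-pattern ideas above: a known payload is sent through a loop-back path and the received data is compared with what was sent.

```python
import socket

def udp_loopback_test(port=50007, payload=b"BERT-PATTERN-10101010"):
    """Send a known pattern over the local loop-back path and verify it returns intact."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", port))
    receiver.settimeout(2.0)

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(payload, ("127.0.0.1", port))

    data, _ = receiver.recvfrom(1024)
    sender.close()
    receiver.close()
    return data == payload   # no errors introduced on the loop-back path

print("loop-back test passed" if udp_loopback_test() else "loop-back test FAILED")
```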

### Command Line Interface (CLI) Tests


These tests validate if the system can be configured as intended through the CLI, ensuring accurate
processing of user commands and accessing relevant system information.

### Importance of Error Messages


In addition to these tests, verifying the accuracy of error messages displayed is crucial. It ensures that users
receive informative, accurate messages during system issues.

Each of these tests plays a vital role in ensuring different aspects of a system's functionality and
performance, contributing to a comprehensive evaluation before more rigorous testing phases.

22)Explain Robustness Tests in detail with suitable example?

### Boundary Value Tests


These tests focus on evaluating how a system responds to boundary conditions, extremes, or invalid inputs.
They aim to ensure the system handles input at or near its operational limits appropriately. For example, in
SNMP (Simple Network Management Protocol), testing might involve setting variables outside their
defined ranges or attempting to configure more variables than the system can handle at once. By testing
these limits, the goal is to verify that the system generates correct error messages or responds appropriately
rather than crashing or behaving unpredictably.

### Power Cycling Tests


Power cycling tests are vital to ensure the system can recover and resume normal operations after power
disruptions or outages. The tests involve intentionally interrupting the power supply to the system and then
checking if it boots up successfully. This validation helps ensure system stability and resilience against
unexpected power failures.

### On-Line Insertion and Removal Tests (OIR)


These tests evaluate the system's ability to handle the addition or removal of components (modules or cards)
while the system is running. They are particularly crucial in environments where components need to be
replaced or added without system downtime. For instance, when swapping an Ethernet card or adding a new
hardware module, the system should adapt without causing crashes or affecting ongoing operations.

### High-Availability Tests


High-availability tests are focused on ensuring the system remains operational and functional even when
components fail. They check the system's redundancy features, such as having standby modules or failover
mechanisms, to ensure seamless operation. For example, testing a standby module's ability to take over
without disruption or ensuring automatic failover from an active server to a standby server without service
interruption.

### Degraded Node Tests


These tests assess how well the system continues to function when a part of it becomes non-operational.
They aim to validate that the system can still perform its critical functions even if some components or
connections fail. For example, simulating the failure of one network connection and ensuring that the
remaining connections handle the load effectively without impacting overall service.

Each type of robustness test is designed to probe specific scenarios that might challenge the system's
stability, resilience, or recovery capabilities. These tests are essential to ensure that the system operates
reliably, even when facing unexpected or adverse conditions in its environment.
23)Discuss in detail about the Characteristics of Testable Requirements

### 1. **Testability Analysis:**


- **Commit State:** This phase involves examining requirement specifications to assess their static
behavioral characteristics, ensuring they are suitable for testing.
- **Requirement Encapsulation:** Transforming requirement descriptions into test objectives helps
outline clear, actionable test scenarios.
- **Workability Evaluation:** Reviewing test objectives ensures they are feasible and executable within
the available system and test environment.

### 2. **Review by System Test Engineers:**


- **Safety & Security:** Identification and specification of safety-critical and security-related
requirements for hazard prevention and data protection.
- **Completeness & Correctness:** Ensuring all necessary elements are covered accurately without
errors, inconsistencies, or omissions.
- **Clarity, Relevance & Feasibility:** Assessing requirements for understanding, relevance,
implementability, and clarity in communication.
- **Verifiability & Traceability:** Ensuring testability through the creation of conclusive tests and easy
traceability of requirements for reassessment.

### 3. **Functional Specification Review:**


- **Correctness:** Ensuring the functional specification aligns with external references and lacks factual
inaccuracies.
- **Extensibility & Comprehensibility:** Designing specifications to accommodate future extensions and
promoting a clear understanding of system functionality.
- **Necessity & Sufficiency:** Ensuring each item in the specification is necessary, and no critical
functions or data properties are missing or incomplete.
- **Implementability & Efficiency:** Creating specifications that can be implemented within resource
constraints and optimizing them for system performance.
- **Simplicity & Reusability:** Prioritizing simple, reusable, and modular specifications for easier
validation and component reusability.

### 4. **Consistency & Limitations:**


- **Consistency with Existing Components:** Ensuring the specification's structure aligns with the
system's existing design choices without necessitating a paradigm shift.
- **Realistic Limitations:** Defining limitations realistically in line with system requirements and
capabilities.

These characteristics collectively ensure that the specified requirements are not just comprehensive and
understandable but also practical for testing, implementation, and system validation. They form the
backbone of a system's reliability and adaptability, enabling teams to build systems that are not only
functional but also resilient to changes and errors.

24)Elaborate the various Load And Stability Tests in detail?

### Load Testing:


Load testing is about assessing how a system performs under real-life load conditions. It involves simulating
various scenarios to test the application's behavior when subjected to varying loads. Here's a detailed
breakdown:

1. **Purpose:**
- **Performance under Load:** To measure how the system behaves under normal and extreme load
conditions.

2. **Types of Load Testing:**


- **Stress Testing:** Overloading the system to observe failure points.
- **Spike Testing:** Assessing the system's response to sudden traffic spikes.
- **Soak Testing:** Evaluating the system's performance under prolonged loads.

3. **Key Parameters Measured:**


- **Throughput:** Amount of data transferred between the server and users.
- **Concurrent Users:** Number of users accessing the system simultaneously.
- **Response Time:** Time taken by the system to respond to user requests.

4. **Load Testing Process:**


- **Test Environment Setup:** Preparing a dedicated environment for load testing.
- **Scenario Creation:** Defining load scenarios and data preparation.
- **Scenario Execution:** Executing the load scenarios and gathering performance metrics.
- **Result Analysis:** Analyzing test results and making recommendations.
- **Re-testing:** Conducting tests again in case of failures for accurate results.

### Stability Testing:


Stability testing focuses on verifying system behavior over time without failure. It aims to assess software
reliability, error handling, and robustness under heavy loads:

1. **Objective:**
- **Continuous Operation:** To ensure the system functions without failure over time.

2. **Testing Parameters:**
- **Memory Usage:** Monitoring memory consumption during operation.
- **CPU Efficiency:** Assessing CPU performance and resource utilization.
- **Transaction Responses:** Verifying the system's responsiveness.
- **Disk Space Checks:** Ensuring adequate disk space availability.

3. **Stability Testing Process:**


- **Testing Objectives:** Ensuring the system remains stable under stress.
- **System Handling:** Verifying system effectiveness and resilience.
- **Error Identification:** Detecting potential system crash points.

4. **Effects of Not Performing Stability Testing:**


- **System Performance Decline:** Slowdowns due to increased data.
- **Sudden System Crashes:** Without stability testing, unexpected crashes might occur.
- **Abnormal Behavior:** System behaves unpredictably in different environments.

### Load Testing Metrics:


Metrics used to assess load testing performance:

1. **Average Response Time:** Time taken for system responses on average.


2. **Error Rate:** Percentage of errors encountered during load testing.
3. **Throughput:** Measurement of data flow between the user and server.
4. **Requests Per Second:** Rate of requests sent to the server.
5. **Concurrent Users:** Count of active users on the system at any given time.
6. **Peak Response Time:** Longest duration taken to handle requests during peak times.
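To make these metrics concrete, here is a minimal sketch (not part of the syllabus text) of how a few of them could be computed from recorded request samples; the `samples` array, its field names, and the run duration are illustrative assumptions.

```javascript
// Hypothetical request samples collected during a load test run.
// Each sample records the response time in ms and whether the request failed.
const samples = [
  { responseTimeMs: 120, failed: false },
  { responseTimeMs: 340, failed: false },
  { responseTimeMs: 95,  failed: true  },
  { responseTimeMs: 210, failed: false },
];
const testDurationSeconds = 2; // total wall-clock time of the run (assumed)

// Average response time: mean of all recorded response times.
const averageResponseTime =
  samples.reduce((sum, s) => sum + s.responseTimeMs, 0) / samples.length;

// Error rate: percentage of requests that failed.
const errorRate =
  (samples.filter((s) => s.failed).length / samples.length) * 100;

// Requests per second: total requests divided by the run duration.
const requestsPerSecond = samples.length / testDurationSeconds;

// Peak response time: the slowest observed request.
const peakResponseTime = Math.max(...samples.map((s) => s.responseTimeMs));

console.log({ averageResponseTime, errorRate, requestsPerSecond, peakResponseTime });
```

In practice a load tool would collect thousands of samples per interval, but the same aggregations apply.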

Load and stability testing are crucial phases in software development as they help identify potential issues,
bottlenecks, and failure points in applications or systems under real-world conditions. These tests ensure
that systems can handle expected user loads without crashing or significant performance degradation over
time.

25.Discuss the various aspect of Documentation Testing in detail?

DOCUMENTATION TESTS

Documentation testing means verifying the technical accuracy and readability of the user manuals,
including the tutorials and the on-line help.
Documentation testing is performed at three levels as explained in the following:

Read Test: In this test the documentation is reviewed for clarity, organization, flow, and accuracy without
executing the documented instructions on the system.
Hands-On Test: The on-line help is exercised and the error messages verified to evaluate their accuracy and
usefulness.
Functional Test: The instructions embodied in the documentation are followed to verify that the system
works as it has been documented.
The following concrete tests are recommended for documentation testing:

Read all the documentation to verify

correct use of grammar,
consistent use of the terminology,
appropriate use of graphics where possible.

Verify that the glossary accompanying the documentation uses a standard, commonly accepted terminology
and that the glossary correctly defines the terms.
Verify that there exists an index for each of the documents and the index block is reasonably rich and
complete. Verify that the index section points to the correct pages.
Verify that there is no internal inconsistency within the documentation.

Verify that the on-line and printed versions of the documentation are the same.

Verify the installation procedure by executing the steps described in the manual in a real environment.
Verify the troubleshooting guide by inserting an error and then using the guide to troubleshoot the error.
Verify the software release notes to ensure that these accurately describe (i) the changes in features and
functionalities between the current release and the previous ones and (ii) the set of known defects and their
impact on the customer.
Verify the on-line help for its

usability
integrity
usefulness of the hyperlinks and cross-references to related topics
effectiveness of table look-up
accuracy and usefulness of indices.

Verify the configuration section of the user guide by configuring the system as described in the
documentation.
Finally, use the document while executing the system test cases. Walk through the planned or existing user
work activities and procedures using the documentation to ensure that the documentation is consistent with
the user work.
Documentation Testing?

Documentation Testing involves testing of the documented artifacts that are usually developed before or
during the testing of Software.

Documentation for Software testing helps in estimating the testing effort required, test coverage,
requirement tracking/tracing, etc. This section includes the description of some commonly used documented
artifacts related to Software development and testing, such as:

Test Plan
Requirements
Test Cases
Traceability Matrix
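As an illustration of the traceability matrix listed above, the following hypothetical sketch maps requirement IDs to the test cases that cover them and flags requirements with no coverage; all identifiers are made up.

```javascript
// Hypothetical requirement-to-test-case traceability matrix.
const traceabilityMatrix = {
  'REQ-001': ['TC-101', 'TC-102'], // requirement covered by two test cases
  'REQ-002': ['TC-201'],           // requirement covered by one test case
  'REQ-003': [],                   // no coverage yet -- a gap to flag
};

// Report requirements that have no covering test case.
const uncovered = Object.entries(traceabilityMatrix)
  .filter(([, testCases]) => testCases.length === 0)
  .map(([requirementId]) => requirementId);

console.log('Requirements without test coverage:', uncovered); // ['REQ-003']
```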

26)Explain Regression testing and performance testing in detail?


Regression testing is a crucial phase in software development aimed at ensuring that modifications or new
implementations in a software application haven't introduced new defects into existing functionalities. The
key aspects of regression testing are broken down below:

### Purpose of Regression Testing:


The primary goal of regression testing is to verify that changes made to the software, whether in the form of
bug fixes, optimizations, or new functionalities, have not negatively impacted the existing, unchanged
portions of the system.

### Scenarios Arising from Code Modifications:


When code modifications are made due to reported defects, there are four possible scenarios for each fix:
1. **Defect Fixed:** The reported defect has been successfully addressed.
2. **Defect Unfixed:** Despite efforts, the reported defect couldn't be resolved.
3. **Regression Introduced:** The reported defect is fixed, but something else that used to work now fails
due to the modification.
4. **Compounded Defect:** The reported defect remains unresolved, and something previously functional
is now failing due to the attempted fix.

### Regression Testing Process:


The process involves selecting a subset of test cases from the existing test suite to check the modified parts
of the code and the potentially affected areas. The process can be outlined as follows:
1. **Change Implementation:** Modify code for bug fixes, feature enhancements, or optimizations.
2. **Test Case Selection:** Choose appropriate test cases from the existing suite or introduce new cases if
required.
3. **Execution:** Perform regression testing using selected test cases to ensure the modifications haven't
introduced new defects.
4. **Verification:** Verify that both modified and affected parts function correctly without any regression
issues.

### Techniques for Test Case Selection:


1. **All Test Cases:** Selecting the entire test suite is simple but not always efficient.
2. **Random Test Cases:** Randomly selecting test cases is less efficient as not all tests might be equally
effective.
3. **Modification Traversal:** Focus on test cases that cover modified or affected parts of the code.
4. **Priority-Based Selection:** Assign priorities to test cases based on their effectiveness in detecting
faults or meeting customer requirements. Select higher-priority test cases for regression testing.
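A minimal sketch of modification traversal combined with priority-based selection, assuming each test case record carries a hypothetical `priority` number and a list of modules it exercises:

```javascript
// Hypothetical regression test suite with priorities and touched modules.
const testSuite = [
  { id: 'TC-1', priority: 1, modules: ['auth'] },
  { id: 'TC-2', priority: 3, modules: ['cart'] },
  { id: 'TC-3', priority: 2, modules: ['auth', 'checkout'] },
  { id: 'TC-4', priority: 5, modules: ['reports'] },
];

// Modules changed by the current fix; only tests touching them are candidates.
const modifiedModules = ['auth', 'checkout'];

const regressionSet = testSuite
  .filter((tc) => tc.modules.some((m) => modifiedModules.includes(m)))
  .sort((a, b) => a.priority - b.priority); // lower number = higher priority

console.log(regressionSet.map((tc) => tc.id)); // ['TC-1', 'TC-3']
```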

### Timing for Regression Testing:


Regression testing should be conducted:
- When new functionalities are added and integrated into existing code.
- After fixing identified defects to ensure the resolution didn't introduce new issues.
- When code optimizations are performed to confirm they didn't disrupt existing functionalities.

Regression testing aims to maintain software reliability by ensuring that modifications or enhancements to
the codebase don't inadvertently introduce new bugs or issues. The choice of test cases and timing of
regression testing play a critical role in detecting potential regressions while ensuring the stability and
reliability of the software.

27)Discuss the Different type Of Functionality Tests in detail with suitable Example
Functionality testing is a critical aspect of software validation, aimed at verifying that a system performs in
alignment with specified requirements. It covers various facets and subgroups to thoroughly examine and
validate different functionalities of the system. Let's break down each subgroup:

### 1. Communication Systems Tests:


- **Basic Interconnection Tests:** Ensures the system can establish primary connections.
- **Capability Tests:** Verifies that the system meets observable capabilities according to defined static
communication system requirements.
- **Behavior Tests:** Verifies dynamic communication system requirements by assessing observable
behavior.

### 2. Module Tests:


- Validates individual modules to ensure they function as intended.
- Focuses on ensuring proper functioning of modules within the entire system context.

### 3. Logging and Tracing Tests:


- Ensures proper configuration and operation of logging and tracing mechanisms.
- Includes testing of 'flight data recorder' logs post system crashes.
### 4. Element Management Systems Tests:
- Verifies management, monitoring, and upgrading functionalities of network elements by the EMS.
- Examples: Auto-Discovery, Polling and Synchronization, Fault Manager, etc.

### 5. Management Information Base (MIB) Tests:


- Verifies standard and enterprise-specific MIBs.
- Tests MIB objects with Get, GetNext, GetBulk, and Set primitives.
- Validates correct incrementation of counters and generation of traps.

### 6. Graphical User Interface (GUI) Tests:


- Validates GUI components like icons, menu bars, dialogue boxes, etc.
- Assesses usability, online help, error messages, tutorials, and user manuals.
- Tests functionality behind the interface, such as accurate response to database queries.

### 7. Security Tests:


- Validates security requirements: confidentiality, integrity, availability.
- Ensures data and processes are protected from unauthorized access, modification, and denial of service.

### 8. Feature Tests:


- Verifies additional functionalities defined in requirements.
- Examples: Data conversion tests (e.g., migration tool testing) and cross-functionality tests.

These different types of functionality tests address various aspects of the system, from its basic connections
and module functionalities to complex areas such as security and GUI usability. Each test subgroup aims to
ensure that the software behaves as expected, meeting all specified requirements and providing a seamless
user experience.
28)Explain in detail the Reliability testing & Interoperability testing in detail?

Reliability Testing and Interoperability Testing are explained in detail below:

### Reliability Testing:

**Definition:** Reliability Testing is a type of software testing aimed at assessing the system's ability to
function consistently and reliably over a prolonged period under specific conditions. It measures the
system's ability to remain operational without failure.

**Objective:** The primary objective of Reliability Testing is to identify defects or issues that can lead to
system failures over time, thus ensuring the system's consistent performance without breakdowns.

**Types of Reliability Testing:**

1. **Stress Testing:** Subjecting the system to high loads or usage to identify performance bottlenecks.
2. **Endurance Testing:** Running the system continuously for an extended period to uncover issues that
may occur over time.
3. **Recovery Testing:** Evaluating the system's ability to recover from failures or crashes.
4. **Load Testing:** Checking the system's ability to handle simultaneous users or transactions.
5. **Volume Testing:** Assessing the system's capability to handle large data volumes.
6. **Soak Testing:** Similar to endurance testing, focusing on stability under a normal load over an
extended time.
7. **Spike Testing:** Subjecting the system to sudden, unexpected increases in load.

**Measurement in Reliability Testing:**

- **Mean Time Between Failures (MTBF):** The average time elapsed between failures.
- **Mean Time To Failure (MTTF):** The average time until the system encounters a failure.
- **Mean Time To Repair (MTTR):** The average time taken to fix failures.
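A small illustrative sketch, on made-up failure and repair records, of how these measures can be estimated; the relation MTBF = MTTF + MTTR used here is a common simplification for repairable systems.

```javascript
// Hypothetical observations: uptime (hours) before each failure and the
// repair time (hours) that followed each failure.
const uptimeHours = [120, 200, 150];
const repairHours = [2, 4, 3];

const failures = uptimeHours.length;

// MTTF: average operating time until a failure occurs.
const mttf = uptimeHours.reduce((a, b) => a + b, 0) / failures;

// MTTR: average time taken to repair a failure.
const mttr = repairHours.reduce((a, b) => a + b, 0) / failures;

// MTBF for a repairable system is often approximated as MTTF + MTTR.
const mtbf = mttf + mttr;

// Steady-state availability estimate.
const availability = mttf / (mttf + mttr);

console.log({ mttf, mttr, mtbf, availability: availability.toFixed(3) });
```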

**Reliability Test Design Factors:**

1. **Correctness:** Ensures that the software functions correctly as per the requirements.
2. **Negative Testing:** Checking for what the system should not do.
3. **User Interface:** Verifying the interface elements for usability and design.
4. **Usability:** Testing how suitable the software is for users to achieve goals.
5. **Performance:** Evaluating system speed, load handling, stress resistance, etc.
6. **Security:** Assessing the system's ability to protect data and maintain functionality.
7. **Integration:** Testing the combined components' behavior after integration.
8. **Compatibility:** Ensuring the application works across different environments or devices.

### Interoperability Testing:

**Definition:** Interoperability Testing assesses the system's ability to interoperate or work with third-party
products or systems seamlessly. It ensures that different systems can communicate and operate together
without compromising their unique functionalities.
**Objective:** The main goal of Interoperability Testing is to validate that systems can communicate
effectively without affecting their independent functionalities.

**Key Aspects of Interoperability Testing:**

1. **Compatibility and Integration:** Ensuring systems can connect and work together seamlessly.
2. **Data Exchange:** Verifying the accurate and secure transfer of data between systems.
3. **Configuration Testing:** Testing reconfigurable aspects during interoperability.

**Approach for Interoperability Testing:**

- **Strategy Planning:** Understanding and planning for all applications involved.


- **Test Conditions Design:** Defining criteria considering the flow of data across applications.
- **Test Case Mapping:** Designing and mapping test cases to validate the conditions.
- **Non-Functional Testing:** Planning for performance, security, and other non-functional aspects.
- **Execution:** Running test cases, logging defects, re-testing, and reporting findings.
- **Review and Action:** Analyzing results, taking corrective actions, and learning from the process.

29) Explain briefly about system Test Design Factors

TEST DESIGN FACTOR


The central activity in test design is to identify inputs to and the expected outcomes from a system
to verify whether the system possesses certain features.

❖ A feature is a set of related requirements. The test design activities must be performed in a planned
manner in order to meet some technical criteria, such as effectiveness, and economic criteria, such as
productivity.

❖ Therefore, we consider the following factors during test design:

o (i) coverage metrics
o (ii) effectiveness
o (iii) productivity
o (iv) validation
o (v) maintenance
o (vi) user skill.

For designing Test Cases the following factors are considered:


1. Correctness
2. Negative
3. User Interface
4. Usability
5. Performance
6. Security
7. Integration
8. Reliability
9. Compatibility
Correctness : Correctness is the minimum requirement of software and the essential purpose of testing. The
tester may or may not know the inside details of the software module under test, e.g., control flow, data
flow, etc.

Negative : In this factor we check what the product is not supposed to do.

User Interface : In UI testing we check the user interface. For example, on a web page we may check a
button for its size and shape, and we can also check the navigation links.

Usability : Usability testing measures the suitability of the software for its users, and is
directed at measuring the following factors with which specified users can achieve specified
goals in particular environments.
1. Effectiveness : The capability of the software product to enable users to achieve specified goals with
accuracy and completeness in a specified context of use.

2. Efficiency : The capability of the product to enable users to expend appropriate amounts of resources in
relation to the effectiveness achieved in a specified context of use.
Performance : In software engineering, performance testing is performed to determine how fast some
aspect of a system performs under a particular workload.

Performance testing can serve various purposes. It can demonstrate that the system meets its performance
criteria.
1. Load Testing: This is the simplest form of performance testing. A load test is usually
conducted to understand the behavior of the application under a specific expected load.

2. Stress Testing: Stress testing focuses on the ability of a system to handle loads beyond maximum
capacity. System performance should degrade slowly and predictably without failure as stress levels are
increased.

3. Volume Testing: Volume testing belongs to the group of non-functional tests. Volume testing refers to
testing a software application with a certain data volume. In generic terms, this volume can be the database
size or the size of an interface file that is the subject of volume testing.
Security : Process to determine that an Information System protects data and maintains
functionality as intended. The basic security concepts that need to be covered by security
testing are the following:
1. Confidentiality : A security measure which protects against the disclosure of information to parties other
than the intended recipient.

2. Integrity: A measure intended to allow the receiver to determine that the information
which it receives has not been altered in transit other than by the originator of the
information.

3. Authentication: A measure designed to establish the validity of a transmission, message, or originator.
Allows a receiver to have confidence that the information it receives originated from a specific known
source.

4. Authorization: The process of determining that a requester is allowed to receive a service/perform an
operation.
Integration : Integration testing is a logical extension of unit testing. In its simplest form,
two units that have already been tested are combined into a component and the interface
between them is tested.
Reliability : The purpose of reliability testing is to monitor a statistical measure of software maturity over
time and compare it to a desired reliability goal.
Compatibility : Compatibility testing is a part of the software's non-functional tests. This testing is
conducted on the application to evaluate the application's compatibility with the computing environment.
Browser compatibility testing can be more appropriately referred to as user experience testing. This requires
that web applications are tested on various web browsers to ensure the following:
Users have the same visual experience irrespective of the browsers through which they view the web
application.
In terms of functionality, the application must behave and respond the same way across various browsers.

30. How will you identify Requirement for develop any kind of applications and
explain various states available for selecting a requirement?
Requirement identification focuses on the life cycle of requirements within an organization. The key
elements of this process are broken down below:

### Objective of Requirement Identification:


- Requirements represent user needs or desires that a system is expected to fulfill.
- Challenges involve capturing the right requirements accurately and ensuring unambiguous communication
between users and development/testing teams.

### Requirement Life Cycle:


- **State Diagram:** Illustrates the life cycle of a requirement from its submission to closure.
- **States:** The requirement progresses through states like submit, open, review, assign, commit,
implement, verification, and closed.
- **Actions:** Each state involves specific actions by the owner (submitter, marketing manager, director of
software engineering, test manager, etc.).

### Requirement Schema and Field Summary:


- Defines the fields associated with each requirement state (e.g., requirement ID, priority, title, submitter,
description, etc.).

### States and Actions in Detail:


- **Submit State:** Entry point for a new requirement, filled with necessary details like priority, title,
description, submitter, etc.
- **Open State:** Managed by the marketing manager, involves reviewing, reevaluating priority,
suggesting software releases, etc.
- **Review State:** Under the director of software engineering, involves review, estimation, creating
functional specifications, etc.
- **Assign State:** Managed by the marketing manager, assigns requirements to specific software releases.
- **Implement State:** Under the director of software engineering, implies coding and unit testing of the
requirement.
- **Verification State:** Under the test manager, verifies the requirement using different methods (testing,
inspection, analysis, demonstration), updates fields accordingly.
- **Closed State:** Marks the requirement as closed after verification by the test manager.
- **Decline State:** Requirement declined due to various reasons, managed by the marketing department,
can result from technical infeasibility, customer discussions, etc.
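To make the life cycle concrete, the sketch below encodes a simplified version of these state transitions as a lookup table with a guard function; the exact transition set is an assumption based on the states listed above (the commit state is placed between assign and implement, and decline is reachable from open and review).

```javascript
// Simplified requirement life-cycle transitions (an assumption; a real schema
// would also carry owners and the fields updated in each state).
const transitions = {
  submit:       ['open'],
  open:         ['review', 'decline'],
  review:       ['assign', 'decline'],
  assign:       ['commit'],
  commit:       ['implement'],
  implement:    ['verification'],
  verification: ['closed'],
  closed:       [],
  decline:      [],
};

function canMove(requirement, nextState) {
  // A move is legal only if nextState is reachable from the current state.
  return (transitions[requirement.state] || []).includes(nextState);
}

const req = { id: 'REQ-42', state: 'review' };
console.log(canMove(req, 'assign')); // true
console.log(canMove(req, 'closed')); // false
```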

### Data Management:


- **Fields Updates:** Each state involves updating relevant fields related to priority, description, notes,
attachments, test case identifiers, results, defects, etc.
- **EC (Engineering Change) Number:** Assigned when a requirement faces technical challenges or
changes.

### Ownership Transition:


- The ownership of a requirement transitions as it moves through different states, ensuring specific
responsibilities at each stage.

### Purpose of Each State:


- Each state represents a phase in the requirement's life cycle, involving specific reviews, evaluations, tests,
and verifications.

### System Implementation:


- Utilizes a database system with a graphical user interface (GUI) for effective management of requirement
transitions and updates.

This structured approach ensures that requirements are captured, reviewed, tested, and verified
systematically to avoid misinterpretations, meet user expectations, and facilitate smooth system
development and delivery.

UNIT-4
31)Illustrate the various Stages that can be adapted on framing Structure of a System Test Plan?
Structure of a System Test Plan
A good plan for performing system testing is the cornerstone of a successful software project. In the absence
of a good plan it is highly unlikely that the desired level of system testing is performed within the stipulated
time and without overusing resources such as manpower and money.
Moreover, in the absence of a good plan, it is highly likely that a low-quality product is delivered even at a
higher cost and later than the expected date.
The purpose of system test planning, or simply test planning, is to get ready and organized for test
execution. Starting a system test in an ad hoc way, after all the modules are checked in to the version control
system, is ineffective.
Working under deadline pressure, people, that is, test engineers, have a tendency to take shortcuts and to
“just get the job done,” which leads to the shipment of a highly defective product.
Consequently, the customer support group of the organization has to spend a lot of time in dealing with
unsatisfied customers and be forced to release several patches to demanding customers.
Test planning is essential in order to complete system testing and ship quality product to the market on
schedule.
Planning for system testing is part of overall planning for a software project. It provides the framework,
scope, resources, schedule, and budget for the system testing part of the project.
Test efficiency can be monitored and improved, and unnecessary delay can be avoided with a good test plan.
The purpose of a system test plan is summarized as follows. A system test plan outline contains:

Introduction
Feature description
Assumptions
Test approach
Test suite structure

Test environment
Test execution strategy
Test effort estimation
Scheduling and milestones
It provides guidance for the executive management to support the test project, thereby allowing them to
release the necessary resources to perform the test activity.
It establishes the foundation of the system testing part of the overall software project.
It provides assurance of test coverage by creating a requirement traceability matrix.
It outlines an orderly schedule of events and test milestones that are tracked.
It specifies the personnel, financial, equipment, and facility resources required to support the system testing
part of a software project.
The activity of planning for system testing combines two tasks: research and estimation. Research allows us
to define the scope of the test effort and resources already available in-house.
Each major functional test suite consisting of test objectives can be described in a bounded fashion using the
system requirements and functional specification as references.
A system test plan is outlined in the list above.
The test plan is released for review and approval after the author, that is, the leader of the system test group,
completes it with all the pertinent details.
The review team must include software and hardware development staff, customer support group members,
system test team members, and the project manager responsible for the project.
The author(s) should solicit reviews of the test plan and ask for comments prior to the meeting.
The comments can then be addressed at the review meeting.
The system test plan must be completed before the software project is committed.

32)Discuss in detail the various Test Execution Strategy methodologies?

The Test Execution Strategy is a critical aspect of system testing, ensuring a systematic approach to
executing test cases, handling failures, and progressing towards a desired quality level. Here's a detailed
breakdown:

### Overview of Test Execution Strategy:


- **Purpose:** Ensures effective system-level testing within defined timelines without excessive resource
usage.
- **Addressed Concerns:** Frequency of test case execution, handling failed test cases, managing multiple
failures, test case order, and final test runs.

### Sequential Test Strategy:


- **Evolving System Concept:** System characterized by a sequence of builds with fewer defects in
subsequent builds.
- **Strategy:** Run test suite T on each build, identify defects, create a new build with fixes, iterate until
desired quality level is attained.
- **Condition:** Assumes independent execution of test cases with no blocking due to system defects.

### Complexities and Realities:


- **Interdependencies:** Some test cases are linked to detected and fixed defects.
- **Defect Fix Challenges:** Fixing one defect may introduce new defects.
- **Incremental Testing:** Full test suite not always required for regression testing; a subset may suffice.

### Parameterized Test Cycle:


- **Characteristics:** Each cycle defined by six parameters: goals, assumptions, test execution, actions,
revert and extension criteria, exit criteria.
- **Controlled Progress:** Allows measured, controlled progress towards the final quality level.

### Multicycle Test Strategy:


- **Multiple Cycles:** Divides the testing into distinct cycles, aiming to improve system quality
progressively.
- **Defect Fix Analysis:** Root cause analyses by development and system test teams essential for
progress evaluation.
### Characterization of Test Cycles:
- **Goal Setting:** Ideal high-standard goals set by the test team for each cycle.
- **Test Prioritization:** Prioritization based on test case failure history, test group membership, and
software component associations.
- **Entry and Exit Criteria:** Conditions defined for starting and completing a test cycle.

### Test Case Selection:


- **Partitioning Test Suite:** Divides test cases into red (must execute), yellow (useful to execute), green
(no value in regression), and white (unspecified) bins.
- **Regression Selection:** Based on failed test cases, test group RCA, and specific test group inclusions.
- **Prioritization:** Orders test cases for execution based on prioritization principles for each test cycle.
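A minimal sketch of the red/yellow/green binning idea, using hypothetical fields (`lastResult`, `affectedByFix`) to decide each test case's bin; the white (unspecified) bin is omitted for brevity.

```javascript
// Hypothetical test case records from the previous cycle.
const testCases = [
  { id: 'TC-1', lastResult: 'failed', affectedByFix: true  },
  { id: 'TC-2', lastResult: 'passed', affectedByFix: true  },
  { id: 'TC-3', lastResult: 'passed', affectedByFix: false },
];

// Partition the suite into bins for the next regression cycle.
const bins = { red: [], yellow: [], green: [] };
for (const tc of testCases) {
  if (tc.lastResult === 'failed') {
    bins.red.push(tc.id);    // must be re-executed
  } else if (tc.affectedByFix) {
    bins.yellow.push(tc.id); // useful to execute
  } else {
    bins.green.push(tc.id);  // no value in re-executing
  }
}

console.log(bins); // { red: ['TC-1'], yellow: ['TC-2'], green: ['TC-3'] }
```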

### Test Cycle Progression:


- **Cycle 1:** Prioritizes test cases to maximize execution without blocking.
- **Cycle 2:** Verifies fixes from Cycle 1, focuses on failed test cases, reassigns test cases based on
engineer expertise.
- **Cycle 3:** Similar prioritization but applied to a selected subset for regression testing.

### Challenges and Considerations:


- **Defect Lifecycle:** Understanding the stages of a known defect aids in managing test cycles.
- **Entry Criteria:** Specific conditions must be met before initiating the first test cycle.
- **Handling Unexpected Events:** Strategies for too many failed test cases or designing new ones during
a cycle.

This detailed test execution strategy addresses various aspects of handling system testing, ensuring
controlled progress, efficient resource usage, and measured improvements in the system's quality across
multiple cycles.
33)Explain detail how the test Environment is created and accessed?
Designing a test environment, often constrained by budget and resources, requires innovative thinking to
effectively fulfill testing objectives. Here's a comprehensive breakdown of strategies, challenges, and steps
involved:

### Test Environment Design Principles:


- **Efficiency Over Scale:** Aim to achieve effective testing with limited resources.
- **Innovative Solutions:** Use simulators, emulators, and traffic generation tools for cost-effective
alternatives.
- **Multiple Environments:** Necessary for scalability testing and reducing testing duration.
### Challenges and Considerations:
- **Complex Systems:** Testing distributed systems and networks involves various interconnected
equipment.
- **Time and Planning:** Setting up effective test beds for large systems requires meticulous planning and
procurement.

### Key Components of Test Environment Design:


1. **Planning and Preparation:**
- Review system requirements and functional specifications to understand system nuances.
- Participate in review processes to identify potential issues in migrating from development to
deployment.
- Document findings and prepare for creating a test bed.

2. **Information Gathering:**
- Gather information on customer deployment architecture, including hardware, software, and
manufacturers.
- Obtain lists of third-party products, tools, and software for integration and interoperability testing.
- Identify hardware requirements for specialized features and new project hardware.
- Analyze test objectives (functional, performance, stress, load, scalability) to identify necessary
environment elements.
- Define security requirements to prevent disruptions during tests.

3. **Equipment Identification:**
- List necessary networking equipment like switches, hubs, servers, and cables for setting up the test lab.
- Identify accessories required for testing, such as racks, vehicles, and shielding to prevent interference.

4. **Designing Test Beds:**


- Develop schematic diagrams and high-level architectures for test beds.
- Create tables specifying equipment types, quantities, descriptions, and layout details.

5. **Equipment Procurement:**
- Review available in-house equipment and identify items that need to be procured.
- Create a test equipment purchase list with quantities, unit prices, maintenance costs, and justifications.
- Obtain quotes from suppliers for accurate pricing and finalize procurement orders.

6. **Implementation and Tracking:**


- Monitor equipment acquisition and installation in line with the approved budget and project schedule.
- Ensure activities remain on track to meet overall project timelines.

### Procurement Justification:


- **Key Questions to Answer:** Why is the equipment needed? What's the impact of not having it? Are
there alternatives?
- **Justification Process:** Outline the necessity and impact of each item on system testing quality and
time-to-market schedule.

In summary, designing a test environment involves comprehensive planning, information gathering,


equipment identification, procurement justification, and meticulous tracking to ensure effective system-level
testing within budget and resource constraints.

34)Explain in detail the various Scheduling and Test Milestones in software testing?
Scheduling system testing involves meticulous planning and coordination to ensure efficient task execution
and timely achievement of milestones. Here's a step-by-step breakdown of effective scheduling for a test
project:

### Steps for Effective Test Project Scheduling:


1. **Task List Development:**
- Create a detailed list of tasks including procurement, test environment setup, test case creation,
execution, and report writing.

2. **Milestone Identification:**
- List major milestones including reviews, completion of test plans, test case creation, environment setup,
and system test cycles.

3. **Identify Interdependencies:**
- Understand how tasks and software milestones influence each other's flow.

4. **Resource Identification:**
- List and categorize available resources like human resources, hardware/software, their expertise,
availability, and capacity.

5. **Resource Allocation:**
- Allocate resources to each task considering availability and expertise required.

6. **Task Scheduling:**
- Schedule start and end dates of each task, considering dependencies and available resources.

7. **Milestone Insertion:**
- Insert remaining milestones into the schedule.

8. **Determine Task Durations:**


- Establish earliest and latest start and end dates considering task interdependencies.

9. **Assumptions Documentation:**
- Document assumptions like hiring, equipment availability, and space requirements.

10. **Schedule Review and Iteration:**


- Review and iterate the schedule based on changes in assumptions or resource availability.

11. **Feedback and Iteration:**


- Review the schedule with the test team and gather feedback for further iterations.

### Gantt Chart:


- **Representation:** Graphical representation showing task durations, dependencies, and order.
- **Purpose:** Assess project duration, task order, resource allocation, and progress monitoring.
- **Structure:** Each task takes one row; dates run along the top with horizontal bars representing task
duration.
- **Complex Projects:** May require subordinate charts detailing subtasks of main tasks.
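As a small illustration of step 8 above (determining task durations from interdependencies), the sketch below computes earliest finish times with a simple forward pass; the task names, durations, and dependencies are hypothetical.

```javascript
// Hypothetical tasks with durations (days) and dependencies.
const tasks = {
  'write test plan':      { days: 5,  dependsOn: [] },
  'set up test beds':     { days: 7,  dependsOn: ['write test plan'] },
  'create test cases':    { days: 10, dependsOn: ['write test plan'] },
  'execute test cycle 1': { days: 14, dependsOn: ['set up test beds', 'create test cases'] },
};

// Earliest finish of a task = its duration plus the latest earliest-finish of
// the tasks it depends on (a simple forward pass, assuming no cyclic dependencies).
function earliestFinish(name, memo = {}) {
  if (memo[name] !== undefined) return memo[name];
  const task = tasks[name];
  const start = Math.max(0, ...task.dependsOn.map((d) => earliestFinish(d, memo)));
  memo[name] = start + task.days;
  return memo[name];
}

console.log(earliestFinish('execute test cycle 1')); // 5 + 10 + 14 = 29 days
```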

35)Discuss Feature Description, Assumptions needed for developing test cases?

### Introduction:
- **Objective:** Describes the structure and objectives of the test plan.
- **Components:**
- **Test Project Name:** Identifies the project.
- **Revision History:** Tracks changes made to the plan.
- **Terminology and Definitions:** Clarifies any ambiguous terms.
- **Approvers' Names and Approval Date:** Indicates authorization.
- **References:** Lists any documents used as references.
- **Summary:** Provides an overview of the test plan's content.

### Feature Description:


- **Objective:** Summarizes the system features to be tested.
- **Content:** High-level overview of the functionalities being tested within the system.

### Assumptions:
- **Purpose:** Describes areas where test cases might not be designed due to specific reasons:
- **Equipment Availability:** Constraints related to scalability, third-party equipment procurement,
regulatory compliance, and environmental testing.
- **Importance:** These assumptions are crucial considerations while reviewing the test plan.

### Test Case Parameters:

1. **Module Name:** Title defining the functionality being tested.


2. **Test Case ID:** Unique identifier for each condition.
3. **Tester Name:** Assigns responsibility for test execution.
4. **Test Scenario:** Offers a brief overview of what needs to be tested.
5. **Test Case Description:** Describes the specific condition to be checked (e.g., validation of certain
input data).
6. **Test Steps:** Enumerates the steps to be performed for the check.
7. **Prerequisite:** Conditions required before starting the test process.
8. **Test Priority:** Indicates the importance or sequence of execution.
9. **Test Data:** Inputs required for the test case.
10. **Test Expected Result:** Anticipated output after test execution.
11. **Test Parameters:** Specific parameters assigned to the test case.
12. **Actual Result:** Outcome observed at the end of the test.
13. **Environment Information:** Details the test environment (OS, software versions, etc.).
14. **Status:** Records the result of the test (pass, fail, NA, etc.).
15. **Comments:** Provides additional remarks for clarity or improvements.
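The parameters above can be captured as a simple record. The sketch below shows one hypothetical test case expressed as a JavaScript object; all field values are illustrative.

```javascript
// One hypothetical test case expressed with the parameters listed above.
const testCase = {
  moduleName: 'Login',
  testCaseId: 'TC-LOGIN-001',
  testerName: 'A. Tester',
  testScenario: 'Verify login with registered credentials',
  testCaseDescription: 'Validate that a registered user can log in',
  testSteps: ['Open login page', 'Enter valid user ID and password', 'Click Login'],
  prerequisite: 'User account already exists',
  testPriority: 'High',
  testData: { userId: 'demo_user', password: 'demo_pass' },
  expectedResult: 'User is redirected to the dashboard',
  environment: 'Windows 10, Chrome 120',
  actualResult: null,  // filled in after execution
  status: 'Untested',  // pass, fail, NA, etc.
  comments: '',
};
```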

### When Test Cases are Written:


- **Before Development:** To identify software requirements.
- **After Development:** Before product/software launch to verify features.
- **During Development:** Parallel to coding to test as modules are built.
Test cases are critical in ensuring software functions as expected and adheres to requirements. They outline
specific conditions, expected outcomes, and other pertinent details for systematic testing at various stages of
software development.

36)Discuss the Test Suite Structure needed for developing test case in detail?

A test suite serves as a container for organizing and managing multiple test cases that are related to one
another. Here are the key points about test suites and how they function:

### Test Suite Definition:


- **Purpose:** Organizes related test cases for efficient execution and reporting.
- **States:** Can be in three states: Active, In Progress, or Completed.
- **Composition:** Contains multiple test cases grouped together based on various criteria.
- **Association:** Test cases can be included in multiple test suites and test plans.

### Grouping Criteria for Test Suites:


1. **Test Type:** Differentiates tests based on their purpose (e.g., integration, unit, regression).
2. **Execution Time:** Classifies tests by their speed of execution (slow, medium, fast).
3. **Modules or Areas:** Organizes tests based on the specific features or modules of the application they
target.

### Test Suite in Practice:

#### Manual Test Suite:


- **Structure:** Can be as simple as a folder with word documents or a more sophisticated approach.
- **Example:** A manual test suite for a shopping cart application may include various test cases related to
cart functionality like adding, removing, or updating items.

#### Automated Test Suite:


- **Structure:** In automated testing frameworks like Jest, a test suite is often represented by code artifacts.
- **Example:** Using Jest, a test suite could be defined using the `describe` function to group test cases
related to specific functionalities or scenarios within the code.

### Test Suite Example:


```javascript
// Jest example of an automated test suite
describe('Shopping Cart Tests', () => {
  test('Adding a single product to an empty cart', () => {
    // Verify cart functionality for adding the first item
  });

  test('Adding two products to an empty cart', () => {
    // Verify cart functionality for adding multiple items
  });

  test('Removing an existing product from the cart', () => {
    // Verify cart functionality for removing items
  });

  test('Adding two instances of the same product to the cart', () => {
    // Verify cart functionality for adding duplicate items
  });
});
```

### Summary:
Test suites are a vital part of organizing test cases and streamlining the testing process. They help testers
efficiently manage and execute tests by grouping related functionalities, scenarios, or modules together.
Whether manual or automated, test suites aim to ensure comprehensive test coverage and maintainability of
test cases within a project.
37) Discuss the various aspects that can follow in test approach?

### 1. Objective:
This section sets the overall aim of the testing plan, detailing the intended procedures and methodologies to
ensure the delivery of high-quality software. It comprises:
- **Functionality and Performance Testing:** Identifying what features and aspects of the application will
undergo testing.
- **Goal Setting:** Establishing specific objectives and targets based on the application's features and
functionalities.
### 2. Scope:
- **In-Scope:** Clearly defines the modules or functionalities that require rigorous testing.
- **Out of Scope:** Specifies the areas or modules exempted from intensive testing efforts.
- **Example:** Illustrates scenarios like purchasing modules from external sources and their limited testing
scope.

### 3. Testing Methodology:


- **Definition:** Outlines the methods or approaches employed for testing, which may vary based on the
application's requirements.
- **Clarification:** Clearly defines the types of testing methodologies used, ensuring comprehension across
the team.

### 4. Approach:
- **High-Level Scenarios:** Defines crucial scenarios representing critical features for testing.
- **Flow Graph:** Utilized to streamline processes or depict complex interactions for efficient
understanding.

### 5. Assumptions:
- **Explanation:** Outlines assumptions made concerning the testing process.
- **Example:** Assures cooperation and support from the development team or adequate resource
allocation.

### 6. Risks:
- **Identify Risks:** Recognition of potential risks related to assumption failures.
- **Reasons:** Explains factors causing risks, such as managerial issues or resource scarcity.

### 7. Mitigation Plan:


- **Strategies:** Lays out actions or strategies to mitigate or avoid identified risks.
- **Examples:** Prioritizing test activities, managerial training, or contingency plans.

### 8. Roles and Responsibilities:


- **Designation:** Clearly defines the roles and responsibilities of each team member involved in testing.
- **Examples:** Test Manager overseeing the project, Testers carrying out test activities, etc.

### 9. Schedule:
- **Timeline:** Provides a detailed timeline indicating the start and end dates of various testing-related
activities.
- **Example:** Marks periods for test case writing, execution, and reporting.

### 10. Defect Tracking:


- **Process:** Describes the process of capturing, prioritizing, and communicating defects found during
testing.
- **Tools:** Mentions the specific defect-tracking tools like Jira or Mantis used for efficient defect
management.

### 11. Test Environments:


- **Configuration:** Details the hardware and software environments utilized for testing purposes.
- **Examples:** Different operating systems, hardware configurations, or network setups.

### 12. Entry and Exit Criteria:


- **Conditions:** Clearly defines the criteria required for the commencement and conclusion of testing
phases.
- **Entry:** Specifies prerequisites like resource readiness and test data availability.
- **Exit:** Outlines conditions for ending the testing phase, like resolving major bugs or fulfilling test case
execution.

### 13. Test Automation:


- **Definition:** Determines which application features or aspects will be automated or manually tested.
- **Criteria:** Based on factors like bug frequency, repetitiveness of tests, and their suitability for
automation.

### 14. Effort Estimation:


- **Planning:** Involves estimating the effort or workload required from each team member.
- **Allocation:** Allocates tasks based on the estimations made for an equitable distribution of work.

### 15. Test Deliverables:


- **Output:** Identifies the various reports and documents produced by the testing team throughout and at
the end of the project.
- **Before Testing:** Includes documents like the test plan, test case documents, etc.
- **During Testing:** Consists of artifacts like test scripts, logs, or interim reports.
- **After Testing:** Comprises final test reports, defect reports, installation reports, etc.

### 16. Templates:


- **Standardization:** Prescribes standardized templates for various reports and documents.
- **Consistency:** Ensures uniformity and consistency across different documents created during the
project, aiding in readability and understanding.

Each component serves a specific purpose in planning, executing, and reporting on the testing process,
contributing to the overall success of the software testing effort.
39. Write in detail about system test planning?

Test Plan
A test plan is a detailed document which describes software testing areas and activities. It outlines the test
strategy, objectives, test schedule, required resources (human resources, software, and hardware), test
estimation and test deliverables.

The test plan is a base of every software's testing. It is the most crucial activity which ensures availability of
all the lists of planned activities in an appropriate sequence.

The test plan is a template for conducting software testing activities as a defined process that is fully
monitored and controlled by the testing manager. The test plan is prepared by the Test Lead (60%), Test
Manager (20%), and the Test Engineer (20%).

Types of Test Plan

There are three types of the test plan

Master Test Plan

Phase Test Plan


Testing Type Specific Test Plans

Master Test Plan

Master Test Plan is a type of test plan that has multiple levels of testing. It includes a complete test strategy.

Phase Test Plan


A phase test plan is a type of test plan that addresses any one phase of the testing strategy. For example, a
list of tools, a list of test cases, etc.

Specific Test Plans

A specific test plan is designed for major types of testing like security testing, load testing, performance
testing, etc. In other words, a specific test plan is designed for non-functional testing.

How to write a Test Plan

Making a test plan is the most crucial task of the test management process. According to IEEE 829, follow
these seven steps to prepare a test plan.

First, analyze product structure and architecture.


Now design the test strategy.
Define all the test objectives.
Define the testing area.
Define all the usable resources.
Schedule all activities in an appropriate manner.
Determine all the Test Deliverables.

40)Discuss in detail about test automation?

The various aspects of automated testing are discussed in detail below:

### Introduction to Automated Testing


Automated testing involves the use of scripts or software tools to execute pre-defined test cases against a
software application. It reduces manual effort, speeds up testing, and enhances accuracy and repeatability in
evaluating software functionality.

### Reasons to Shift from Manual to Automated Testing

#### Quality Assurance


- **Increased Test Coverage:** Automated tests can cover a broader range of scenarios compared to
manual testing, improving overall software quality.
- **Error Detection:** It efficiently identifies bugs and errors, aiding in their timely resolution.
- **Consistency:** Automated tests execute in a consistent manner, reducing human-induced
inconsistencies.

#### Time and Cost Efficiency


- **Faster Test Execution:** Automation speeds up test execution, reducing time and resource
requirements.
- **Reduced Manpower:** Requires fewer testers for executing repetitive test cases, saving human
resources.

#### Repeatability and Reusability


- **Repetitive Task Handling:** Automated tests handle repetitive tasks with ease, allowing for frequent
execution without manual intervention.
- **Reusable Scripts:** Test scripts can be reused across different phases of development and for regression
testing.

### Differences Between Manual and Automated Testing

#### Reliability
- **Manual Testing:** Vulnerable to human error, leading to less reliability.
- **Automated Testing:** More reliable due to consistent and repeatable test execution.

#### Time and Resource Investment


- **Manual Testing:** Demands significant investment in terms of human resources.
- **Automated Testing:** Initial investment in tool selection and script creation, but lower ongoing
resource requirements.

#### Programming Knowledge


- **Manual Testing:** Does not require programming skills for test execution.
- **Automated Testing:** Requires programming knowledge to script and execute tests.

#### Test Coverage and Regression Testing


- **Manual Testing:** Difficult to achieve complete test coverage and comprehensive regression testing.
- **Automated Testing:** Facilitates comprehensive test coverage and efficient regression testing.

### Types of Automation Testing

#### Unit Testing


- Tests individual code units or modules in isolation to ensure their functionality.

#### Integration Testing


- Checks the interaction and integration of different components/modules to identify issues.

#### Smoke Testing


- Verifies if the basic functionalities work correctly before more in-depth testing.

#### Performance Testing


- Evaluates system performance and stability under specific conditions and loads.

#### Regression Testing


- Ensures that recent code changes haven’t adversely affected existing features.

#### Security Testing


- Identifies vulnerabilities and loopholes in the software's security mechanisms.

#### Acceptance Testing


- Validates whether the application meets end-users' expectations and approval criteria.

#### API Testing


- Validates the Application Programming Interface (API) functionality and reliability.
#### UI Testing
- Verifies that the user interface elements and functionalities work as intended.

### Criteria for Test Automation

#### Monotonous and Repetitive Tests


- Test cases that require repeated execution can be automated for efficiency.

#### Tests Requiring Multiple Data Sets


- Automation can efficiently handle tests requiring various data sets for validation.

#### Business-Critical Tests


- High-risk or critical test cases can be automated for accurate and regular execution.

#### Determinant and Tedious Tests


- Test cases with deterministic pass/fail criteria or involving repetitive tasks suit automation.

### Automation Testing Process

#### Tool Selection


- Criteria like ease of use, browser support, flexibility, analysis, and cost guide the selection.

#### Defining Scope


- Ensuring the automation framework supports required aspects like maintenance and returns on investment.

#### Planning, Design, and Development


- Installation of frameworks, libraries, and scripting using suitable automation tools.

#### Test Execution


- Running automated test scripts using chosen automation frameworks or tools.

#### Maintenance
- Documenting and updating test results, scripts, and reports for future reference.

This is an extensive overview of the fundamentals, differences, types, and the process involved in
automated testing, emphasizing its benefits and key considerations.

UNIT-5

41.What is the Preparedness that is needed to Start System Testing?

The details and significance of the test execution working document outline (Table 5.1 in the text) are
broken down below:
### 1. Test Engineers Section
- **Purpose:** Lists names, availability, and expertise of test engineers on the project.
- **Significance:** Aids in resource allocation and identifies areas where additional expertise may be
required.

### 2. Training Requirements


- **Purpose:** Documents training needs for successful project execution.
- **Significance:** Ensures team members have necessary skills, avoiding gaps in knowledge or
capabilities.

### 3. Test Case Allocation


- **Purpose:** Assigns test cases based on expertise and interest of test engineers.
- **Significance:** Helps in effective utilization of team members' skills and interests for efficient testing.

### 4. Test Bed Allocation


- **Purpose:** Assigns test beds to test engineers, ensuring preparedness before test cycles.
- **Significance:** Maximizes execution efficiency by minimizing downtime and ensuring continuous test
bed availability.

### 5. Automation Progress


- **Purpose:** Tracks progress in automating test cases, particularly for regression tests.
- **Significance:** Determines availability of automated test cases for execution, reducing manual effort
and time.

### 6. Projected Test Execution Rate


- **Purpose:** Projects weekly execution of test cases against actual execution during the cycle.
- **Significance:** Offers a clear visual representation of testing progress and identifies potential delays.

### Execution of Failed Test Cases


- **Purpose:** Outlines strategies for executing failed test cases and verifying defect fixes.
- **Significance:** Provides a structured approach for retesting failed test cases to ensure defect resolution.

### Development of New Test Cases


- **Purpose:** Specifies the strategy for writing new test cases during test case execution.
- **Significance:** Guides the development of additional test cases if needed during the testing process.

### Trial Software Image Usage


- **Purpose:** Allows early access to software images for test engineers.
- **Significance:** Facilitates familiarization with the system, validation of test steps, and ensures
operational test beds.

### Schedule of Defect Review Meeting


- **Purpose:** Establishes a meeting schedule for defect reviews during the test cycle.
- **Significance:** Ensures timely identification and resolution of defects by involving cross-functional
team members.

This document structure outlines essential elements required for effective test execution management. It
focuses on resource allocation, skill enhancement, test case allocation, automation progress, test bed
readiness, defect resolution, and collaboration among team members throughout the testing lifecycle.

42) Discuss the various Metrics needed for Tracking System Testing?

Metrics for Tracking System Test

The metrics used to track system test execution and defect monitoring in software development projects are
described below:

### Metrics for Monitoring Test Execution


1. **Test Case Execution Progress:** Tracking the number of test cases executed against the planned
number provides insights into testing progress.

2. **Test Case Pass/Fail Ratio:** Analyzing the ratio of passed to failed test cases helps in assessing the
system's stability and identifying areas needing attention.

3. **Test Coverage:** Measures the extent to which the software's functionality has been tested. It includes
code coverage and functional coverage.

4. **Test Cycle Duration:** Measures the time taken to complete a test cycle, indicating the efficiency of
testing activities.

5. **Defect Discovery Rate:** Tracks the rate at which defects are being discovered during testing,
providing insight into the software's quality.

### Metrics for Monitoring Defects


1. **Defect Density:** Calculated as the number of defects per unit size of code or per test case. It indicates how concentrated defects are in the system relative to its size.

2. **Defect Aging:** Tracks how long defects remain unresolved, helping identify bottlenecks in the defect
resolution process.

3. **Severity Distribution:** Categorizes defects based on severity levels (critical, major, minor) to
prioritize resolution efforts.
4. **Defect Reopen Rate:** Measures the rate at which previously closed defects are reopened, indicating
the effectiveness of fixes.

5. **Root Cause Analysis:** Tracks the root causes of defects to identify recurring issues and implement
preventive measures.

### Importance and Application of Metrics


1. **Project Monitoring:** Metrics provide real-time visibility into the progress of testing, enabling timely
interventions and decisions.

2. **Risk Mitigation:** Early detection of high defect density or failing test cases allows for proactive
measures to address issues, reducing project risks.

3. **Decision-Making:** Helps management in making informed decisions regarding test cycle continuation, resource allocation, and prioritization of defects.

4. **Process Improvement:** Post-project analysis using collected metrics aids in identifying areas for
process enhancement and future project planning.

5. **Efficiency Improvement:** Allows for optimization of test execution by identifying areas of improvement, increasing overall testing efficiency.

By actively monitoring these metrics during system testing, management gains a clear understanding of the
project's health, enabling them to make informed decisions, take corrective actions, and ensure the
successful delivery of a high-quality product to the customer.
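
To make a few of these metrics concrete, here is a minimal sketch with hypothetical numbers; the variable names and data are illustrative assumptions, not values from any real project.

```python
from datetime import date

# Hypothetical raw counts from a test cycle.
executed, planned = 180, 200          # test case execution progress
passed, failed = 140, 40              # pass/fail split of executed cases
defects_found, kloc = 36, 12.0        # defects and code size in KLOC

progress = executed / planned * 100              # % of planned cases executed
pass_ratio = passed / (passed + failed) * 100    # pass rate among executed cases
defect_density = defects_found / kloc            # defects per KLOC

# Defect aging: days each open defect has remained unresolved.
open_defects = {"DEF-17": date(2024, 3, 1), "DEF-21": date(2024, 3, 10)}
today = date(2024, 3, 20)
aging = {d: (today - opened).days for d, opened in open_defects.items()}

print(f"progress={progress:.0f}%  pass_ratio={pass_ratio:.0f}%  "
      f"density={defect_density:.1f}/KLOC  aging={aging}")
```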

43) Illustrate the different Metrics for Monitoring Defect Reports in detail?

### Metrics for Monitoring Test Execution

1. **Test Case Escapes (TCE):**


- **Definition:** Test case escapes are new test cases that test engineers find it necessary to design during test execution. An increase in such escapes signals weaknesses in the original test design and can impact the project schedule.

2. **Planned versus Actual Execution (PAE) Rate:**


- **Definition:** Compares planned versus actual execution of test cases weekly. A significant deviation
may require preventive actions to avoid project delays.

3. **Execution Status of Test (EST) Cases:**


- **Definition:** Tracks the number of test cases in different states (failed, passed, blocked, invalid,
untested) post-execution, categorized by test types (basic, functionality, robustness).

### Examples from Real-life Test Project (Bazooka)


- **PAE Metric:** The project planned the execution of 1592 test cases over a 14-week cycle, with actual execution plotted against the projections each week.
- **Execution Rate:** Execution rates were lower in the early weeks because the engineers needed about three weeks to become familiar with the system, which stretched the cycle to 16 weeks instead of the planned 14 (a quick arithmetic check follows these points).
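
As a quick check on these figures, 1592 test cases over the planned 14-week cycle implies an average execution rate of roughly 1592 / 14 ≈ 114 test cases per week; once the cycle stretched to 16 weeks, the required average dropped to about 1592 / 16 ≈ 100 per week.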

### Monitoring Defect Reports

1. **Function as Designed (FAD) Count:**


- **Definition:** Tracks defects mistakenly reported due to misunderstandings. Higher FAD counts imply
lower system understanding by test engineers.

2. **Irreproducible Defects (IRD) Count:**


- **Definition:** Tracks defects that cannot be replicated after reported. Reproducibility aids developers
in understanding and fixing issues.

3. **Defects Arrival Rate (DAR) Count:**


- **Definition:** Quantifies the inflow of defect reports from various sources (system testing, software
development, SIT groups, etc.), reflecting different objectives.

4. **Defects Rejected Rate (DRR) Count:**


- **Definition:** Measures the rate at which reported defects are rejected post-fix attempts by the
development team.

5. **Defects Closed Rate (DCR) Count:**


- **Definition:** Indicates how efficiently the testing group verifies claimed fixes, ensuring that resolved defects are genuinely closed.

6. **Outstanding Defects (OD) Count:**


- **Definition:** Quantifies defects that persist despite being reported, signaling unresolved issues.

7. **Arrival and Resolution of Defects (ARD) Count:**


- **Definition:** Useful for comparing defect arrival rates against their resolution rates, providing
insights into defect handling efficiency.

### Importance and Application of Metrics


- **Project Health:** Metrics aid in gauging test progress, system understanding, defect inflow, and
resolution efficiency.

- **Decision-Making:** Enables informed decisions, corrective actions, and resource allocation to ensure
project success and quality product delivery.

By actively tracking these metrics, project teams gain visibility into test progress, system understanding,
defect trends, and resolution efficiency, empowering them to steer projects effectively, mitigate risks, and
maintain product quality.
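
As a small illustration of how some of these counts might be derived from a defect-tracker export, the sketch below tallies hypothetical defect reports by state; the state names and records are assumptions rather than a fixed schema.

```python
from collections import Counter

# Hypothetical defect reports exported from a defect tracking system.
# Each tuple is (defect_id, state).
reports = [
    ("D1", "open"), ("D2", "closed"), ("D3", "FAD"), ("D4", "irreproducible"),
    ("D5", "assigned"), ("D6", "closed"), ("D7", "FAD"), ("D8", "open"),
]

states = Counter(state for _, state in reports)

fad_count = states["FAD"]                           # function-as-designed reports
ird_count = states["irreproducible"]                # irreproducible defects
outstanding = states["open"] + states["assigned"]   # OD: still unresolved
fad_rate = fad_count / len(reports) * 100           # % of reports that were FADs

print(f"FAD={fad_count} ({fad_rate:.0f}%)  IRD={ird_count}  outstanding={outstanding}")
```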
44) Explain in detail about System Test Report
The structure and contents of a Final System Test Report are crucial for summarizing the test project's
outcomes and providing insights into the testing process. Let's dive into the different sections and elements
that compose this report:

### Introduction to the Test Project Section:


- **Purpose:** Summarizes the objective of the report, including project name, software image details,
revision history, terminology, testing staff, scope, and references.

### Summary of Test Results Section:


- **Test Cycles Summary:** Provides an overview of each test cycle, including start/completion dates,
counts of passed/failed/invalid/blocked/untested test cases, and reasons for untested cases (e.g., equipment
unavailability).

- **Defects Summary:** Details the number of defects filed, their different states (e.g., irreproducible,
FAD, closed, shelved, duplicate), postponed, assigned, and open, providing a comprehensive view of defect
statuses.

### Test Objective Section:


- **Objective Description:** Describes the testing types executed (e.g., functional, regression,
performance), outlining the purpose of each type. Regression testing's primary objective is usually
emphasized, focusing on defect discovery post-new feature code addition.

### Test Cases, Test Coverage, and Execution Details:


- **Test Suite Explanation:** Describes the executed test types, their location, timing, and details about the
number of tests, their outcomes (passed/failed/skipped), and code coverage.

- **Execution Details:** Records who, when, and where the code was tested, providing insights into the
execution process.

### Defect Counts:


- **Defect Documentation:** Highlights found defects, describing them briefly for post-testing analysis.
Prioritizes defects based on severity to aid in decision-making regarding release readiness.

### Platform and Test Environment Configuration Details:


- **Configuration Information:** Describes the test environment, platform details, and any nonstandard
configurations. However, sensitive information about the application's code or servers should be handled
cautiously due to security concerns.

- **Impact Analysis:** If any configuration changes during testing led to defects, this section covers how
such changes affected the defects, testing scope, and coverage.

### Overall Importance and Purpose:


- **Decision-Making and Analysis:** The report helps stakeholders make informed decisions about product
readiness for release by providing a comprehensive view of test outcomes, defect statuses, and execution
details.

- **Documentation for Future Reference:** Serves as a reference for future projects, aiding in
understanding testing methodologies and outcomes.

The report aims to present a consolidated view of the test project's execution, outcomes, and defect statuses
to facilitate decision-making and future planning.

45)Discuss the Five Views of Software Quality in detail?

Software Quality:

Quality is a complex concept—it means different things to different people, and it is highly context
dependent.

Five views of quality:


Transcendental View: It envisages quality as something that can be recognized but is difficult to define. The
transcendental view is not specific to software quality alone but has been applied in other complex areas of
everyday life.

User View: It perceives quality as fitness for purpose. According to this view, while evaluating the quality
of a product, one must ask the key question: “Does the product satisfy user needs and expectations?”

Manufacturing View: Here quality is understood as conformance to the specification. The quality level of a
product is determined by the extent to which the product meets its specifications.

Product View: In this case, quality is viewed as tied to the inherent characteristics of the product.

Value-Based View: Quality, in this perspective, depends on the amount a customer is willing to pay for it.

46)Briefly explain the various stages in Product Sustaining in detail?


The Sustaining phase in software development, which follows the General Availability (GA) declaration of
a product, focuses on maintaining and improving software quality throughout its market life. This phase
involves various maintenance activities aimed at addressing defects, adapting to changes, and enhancing
functionalities:

### Types of Software Maintenance Activities:


1. **Corrective Maintenance:** Involves isolating and fixing defects reported in the software. When a
defect comes in from a user through customer support, the sustaining team—comprising developers and
testers—immediately works on it if it's deemed corrective. Progress updates are sent to the customer within
24 hours, and the team continues until a patch with the fix is released.
2. **Adaptive Maintenance:** Modifies the software to interface with changes, such as new hardware
versions or third-party software updates. If the reported issue is adaptive or perfective, it is logged into the
requirement database and undergoes regular software development phases.

3. **Perfective Maintenance:** Enhances the software by adding new functionalities or improving existing
ones.

### Sustaining Team's Responsibilities:


1. **Handling Reported Defects:**
- Developers and testers address corrective issues immediately.
- Adaptive or perfective issues go through regular development phases.

2. **Training and Transition:**


- Sustaining engineers receive adequate training about the product before transitioning.
- Engineers are rotated between system testing and sustaining groups for varied experience.

3. **Sustaining Test Engineers' Tasks:**


- Interact with customers to understand real-world environments and issues.
- Reproduce reported issues in the lab environment.
- Develop new test cases to verify and address identified problems.
- Create upgrade/downgrade test cases from old to new patch images.
- Participate in code review processes.
- Select regression test subsets to ensure fixes don't cause side effects.
- Execute selected test cases and review release notes.
- Conduct experiments to evaluate test effectiveness and improve test coverage.

### Importance and Objectives:


- **Customer-Centric Approach:** Interacting with customers helps in understanding their needs and
real-world challenges, enabling better issue reproduction and solution development.

- **Quality Assurance:** The goal is to maintain and improve software quality post-release, ensuring that
patches and updates don't introduce new issues or affect existing functionalities negatively.

- **Continuous Improvement:** Experimentation and review processes aim to enhance the effectiveness of
testing strategies, contributing to ongoing improvements in the software maintenance process.

The sustaining phase is crucial in ensuring customer satisfaction, maintaining software quality, and adapting
the product to evolving needs and environments. It involves a proactive approach to address reported issues
swiftly and efficiently while continually enhancing the software's capabilities.
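
A minimal sketch of how a sustaining team might triage an incoming report according to the maintenance categories described above; the routing rules and return values are illustrative assumptions, not a prescribed workflow.

```python
def triage_report(category: str) -> str:
    """Route a reported issue based on its maintenance category (illustrative only)."""
    if category == "corrective":
        # Defect fix: sustaining developers and testers work on it immediately,
        # sending progress updates to the customer until a patch is released.
        return "assign to sustaining team; send progress update within 24 hours"
    if category in ("adaptive", "perfective"):
        # Change or enhancement: logged into the requirement database and taken
        # through the regular software development phases.
        return "log in requirement database; schedule through normal development"
    return "reject or request clarification"

print(triage_report("corrective"))
print(triage_report("perfective"))
```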

47)Write about Test Execution Metric with Examples

1. **Planned vs. Actual Execution (PAE) Rate:** Comparing the planned number of test cases against the
actual executed cases on a weekly basis.

2. **Estimating Execution Challenges:** Noting the initial performance challenges and unforeseen factors
like delayed stress and load tests due to late fixes.
3. **Execution Status of Test (EST) Cases:** Enumerating the status of executed test cases in categories
like passed, failed, invalid, blocked, and untested.

4. **Decision-Making on Test Cycle Abandonment:** Evaluating whether to abandon the test cycle based
on predefined criteria and actual execution figures.

5. **Decision Rationale:** Justifying the decision to continue the cycle despite not meeting planned test
execution numbers.

6. **Overall Observations and Lessons:** Reflecting on the learnings from the challenges faced during the
execution cycle.

These metrics allow teams to track progress, identify challenges, and make informed decisions during the
test execution process.

Let's consider a hypothetical project, "RocketBoost," where 800 test cases are designated for execution
within a 10-week test cycle. Here's a breakdown of the test execution metrics and scenarios:

### Planned vs. Actual Execution (PAE) Rate:


- **Projected Execution:**
- Week 1: Plan to execute 40 test cases.
- Week 2: Aim for 80 test cases executed cumulatively.
- By Week 4: Target 200 test cases executed.

- **Actual Execution:**
- Week 1: Execute 35 test cases.
- Week 2: 70 test cases were completed.
- Week 4: Achieved 180 test cases executed.

### Estimating Execution Challenges:


- **Initial Performance:** The execution rate falls slightly short in the first few weeks while the team familiarizes itself with the complex system.
- **Unforeseen Factors:** A delay in executing the stress and load tests emerged because fixes for memory leaks arrived late, prompting a two-week extension.

### Execution Status of Test (EST) Cases:


Suppose in Week 4:
- **Passed:** 120
- **Failed:** 25
- **Invalid:** 10
- **Blocked:** 5
- **Untested:** 40

### Decision-Making on Test Cycle Abandonment:


- **Abandonment Criteria:** Considering the predefined criteria for abandoning the test cycle:
- Planned: 200 test cases by Week 4.
- Actual: Only 180 test cases completed due to various factors.
- **Factors Influencing Decision:**
- Insufficient test case execution against the plan.
- Some test cases initially marked as failed due to procedural errors were later rectified and passed.

### Decision Rationale:


- **Justification for Not Abandoning:**
- Partial execution was attributed to initial setbacks and procedural rectifications.
- The team believed the test suite's critical aspects were adequately covered.
- Rectified test cases were deemed valid, contributing to overall coverage.

### Overall Observations and Lessons:


- **Learning from Challenges:**
- Early familiarization and resolving procedural issues are critical.
- Late fixes affecting crucial tests can impact the cycle timeline.
- Despite setbacks, critical test aspects must be ensured before cycle termination.

This hypothetical example demonstrates how discrepancies between planned and actual test execution rates,
coupled with procedural rectifications, influence decisions about abandoning or continuing a test cycle.
Such evaluations guide future planning and emphasize early preparedness to mitigate execution delays.
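
The comparisons above can be computed mechanically. The sketch below is a minimal illustration using the hypothetical RocketBoost numbers; the data structures are assumptions, not part of any standard tool.

```python
# Cumulative planned vs. actual executions (hypothetical RocketBoost figures).
planned = {1: 40, 2: 80, 4: 200}
actual = {1: 35, 2: 70, 4: 180}

for week in sorted(planned):
    deviation = (actual[week] - planned[week]) / planned[week] * 100
    print(f"Week {week}: planned={planned[week]}, actual={actual[week]}, "
          f"deviation={deviation:+.1f}%")

# Execution status of test (EST) cases at Week 4.
est = {"passed": 120, "failed": 25, "invalid": 10, "blocked": 5, "untested": 40}
determinate = est["passed"] + est["failed"]
pass_rate = est["passed"] / determinate * 100  # pass rate among passed + failed cases
print(f"Pass rate among determinate (passed/failed) cases: {pass_rate:.0f}%")
```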

48)Discuss the various Metrics needed for Monitoring Test Execution?


The following defect monitoring metrics are tracked while monitoring system test execution:

1. **Function as Designed (FAD) Count:** This metric evaluates the understanding level of test engineers
about the system. FADs represent reported defects that are not actual system issues but stem from a
misunderstanding. If the FAD count exceeds a certain threshold (e.g., 10%), it signals inadequate system
understanding. Lower FAD counts indicate better comprehension.

2. **Irreproducible Defects (IRD) Count:** IRDs are defects that can't be replicated or consistently
observed after initial reporting. Reproducibility is crucial for developers to understand and fix issues. High
IRD counts may imply communication or documentation gaps between testers and developers.

3. **Defects Arrival Rate (DAR) Count:** DAR measures the rate at which defect reports come in from
various sources (system testing, software development, SIT groups, etc.). This metric tracks how defects
accumulate during testing and from different contributors, aiding in resource allocation and issue
prioritization.

4. **Defects Rejected Rate (DRR) Count:** DRR assesses the rate at which reported defects are rejected
after attempted fixes. It reflects the complexity or validity of the reported issues. A high rejection rate may
indicate unclear defect reports or challenges in fixing those issues.

5. **Defects Closed Rate (DCR) Count:** DCR tracks the percentage of reported defects that undergo
successful resolution and subsequent verification by the testing team. It measures the efficiency of verifying
fixed issues, demonstrating the effectiveness of the resolution process.
6. **Outstanding Defects (OD) Count:** OD represents the number of unresolved defects at a given time.
These are reported issues that remain open and yet to be addressed. Tracking OD helps in prioritizing and
managing ongoing issues.

7. **Arrival and Resolution of Defects (ARD) Count:** ARD compares the rate of new defects being found
against the rate at which existing defects are resolved. It gives insights into the efficiency of the defect
identification and resolution process over time.

These metrics are valuable in understanding the quality of the product, the efficiency of defect resolution,
and the collaboration between different teams involved in the testing and development process. Tracking
them helps in proactive defect management and continuous improvement in the software development
lifecycle.
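
One common way to express the rejected and closed rates as percentages of the defects reported in a given period is shown below; this formulation is an assumption consistent with the definitions above, not a formula taken from the text.

```latex
\[
\mathrm{DRR} = \frac{\text{number of defects rejected}}{\text{number of defects reported}} \times 100\%,
\qquad
\mathrm{DCR} = \frac{\text{number of defects closed}}{\text{number of defects reported}} \times 100\% .
\]
```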

49)Explain in detail about First Customer Shipment

First Customer Shipment


The exit criteria of the final test cycle, which are established in the test execution strategy section, must be satisfied before the FCS. An FCS readiness review meeting is called to ensure that the product meets the shipment criteria.

The shipment criteria are more than just the exit criteria of the final test cycle. This review should include
representatives from the key function groups responsible for delivery and support of the product, such as
engineering, operation, quality, customer support, and product marketing. A set of generic FCS readiness
criteria is as follows:

- All the test cases from the test suite should have been executed. If any of the test cases is unexecuted, then an explanation for not executing the test case should have been provided in the test factory database.
- Test case results are updated in the test factory database with passed, failed, blocked, or invalid status. Usually, this is done during the system test cycles.
- The requirement database is updated by moving each requirement from the verification state to either the closed or the decline state, as discussed in Chapter 11, so that compliance statistics can be generated from the database. All the issues related to the EC must be resolved with the development group.
- The pass rate of test cases is very high, say, 98%.
- No crash in the past two weeks of testing has been observed.
- No known defect with critical or high severity exists in the product.
- Not more than a certain number of known defects with medium and low levels of severity exist in the product. The threshold number may be determined by the software project team members.
- All the identified FCS blocker defects are in the closed state.
- All the resolved defects must be in the closed state.
- All the outstanding defects that are still in either the open or assigned state are included in the release note, along with the workaround if there is one.
- The user guides are in place.
- A troubleshooting guide is available.
- The test report is completed and approved. Details of a test report are explained in the following section.

Once again, three weeks before the FCS, the FCS blocker defects are identified at the cross-functional
project meeting. These defects are tracked on a daily basis to ensure that the defects are resolved and closed
before the FCS.

In the meantime, as new defects are submitted, these are evaluated to determine whether these are FCS
blockers. If it is concluded that a defect is a blocker, then the defect is included in the blocker list and
tracked along with the other defects in the list for its closure.
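
As a rough illustration, several of these readiness criteria can be checked mechanically from project data. The sketch below is hypothetical; the field names and the 98% and two-week thresholds simply mirror the criteria listed above.

```python
def fcs_ready(metrics: dict) -> bool:
    """Evaluate a subset of hypothetical FCS readiness criteria."""
    checks = [
        metrics["pass_rate"] >= 98.0,            # very high pass rate, e.g. 98%
        metrics["days_since_last_crash"] >= 14,  # no crash in the past two weeks
        metrics["open_critical_or_high"] == 0,   # no critical/high severity defects
        metrics["open_medium_low"] <= metrics["medium_low_threshold"],
        metrics["open_blockers"] == 0,           # all FCS blocker defects closed
        metrics["test_report_approved"],
    ]
    return all(checks)

snapshot = {
    "pass_rate": 98.4, "days_since_last_crash": 21,
    "open_critical_or_high": 0, "open_medium_low": 7, "medium_low_threshold": 10,
    "open_blockers": 0, "test_report_approved": True,
}
print("FCS ready:", fcs_ready(snapshot))
```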

50)Illustrate the various Stages for Measuring Test Effectiveness


Measuring test effectiveness is crucial to ensure the reliability of a product post-deployment. One common
metric used is Defect Removal Efficiency (DRE), which assesses how effectively testing has managed to
identify defects before the product release. The formula for DRE is:

However, calculating defects not found during testing can be complex. To approximate this, one approach is
to count defects reported by customers within a specified period post-deployment (e.g., six months). But to
interpret DRE effectively, several considerations are essential:

1. **Test Environment Limitations:** Certain defects may escape even in rigorous testing due to limitations
in the test environment. Deciding whether to include these defects in calculations depends on whether the
aim is to measure effectiveness inclusive of environment limitations.

2. **Defect Classification:** Differentiate defects that need corrective maintenance from those requiring
adaptive or perfective maintenance. Exclude the latter from the calculation as they often represent requests
for new features rather than actual issues.

3. **Consistency in Duration:** Determine a consistent start and end point for defect counting across all
test projects, like from unit testing to system testing, to maintain consistency.

4. **Long-Term Trend:** DRE should be seen as part of a long-term trend in test effectiveness for the
organization, not just a one-time project measure.

**Fault Seeding Approach:** This method injects a small number of representative defects into the system
to estimate the number of escaped defects. The percentage of these seeded defects uncovered by sustaining
test engineers, who are unaware of the seeding, helps extrapolate the extent to which unknown defects
might have been found.
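
Under the usual fault seeding assumption that seeded and indigenous (original) defects are equally likely to be detected, the total number of indigenous defects, and hence the number that escaped, can be estimated as follows, where S defects are seeded, s of them are found, and n indigenous defects are found in the same period:

```latex
\[
\hat{N} \approx n \cdot \frac{S}{s},
\qquad
\text{escaped defects} \approx \hat{N} - n .
\]
```

A low proportion of seeded defects being found (small s relative to S) therefore implies a larger estimate of defects still hidden in the product.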
**Spoilage Metric:** Defects introduced at various stages of software development (requirements, design,
coding) are gradually removed through different testing phases (unit, integration, system, acceptance
testing).
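
A commonly used formulation of the spoilage metric, stated here as an assumption consistent with the description above, weights defects by their age, i.e., the number of phases between introduction and detection:

```latex
\[
\text{Spoilage} =
\frac{\sum_{i} (\text{defects introduced in phase } i) \times (\text{their age at detection, in phases})}
     {\text{total number of defects}} .
\]
```

A lower spoilage value indicates that defects are being found close to the phase in which they are introduced, which is the goal of an effective defect removal process.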

By understanding these nuances, teams can accurately gauge the effectiveness of their testing efforts,
recognizing limitations and applying strategies like fault seeding to estimate escaped defects, thereby
improving product quality over time.

REGARDS
