STE Question Bank Chapter4

Uploaded by pratik.ks1973

STE

“Chapter 4”
2 MARKS QUESTIONS:
1. What are the different causes of software defects?

 Miscommunication of requirements
 Unrealistic deadlines
 Lack of design/coding experience
 Human errors
 Lack of version control
 Third-party tool issues
 Last-minute changes
 Poor testing skills.

2. Describe requirement defects and coding defects in detail.

 Requirement Defects: Occur due to incomplete, incorrect, or ambiguous requirements, leading to mismatches between what is needed and what is delivered.
 Coding Defects: Errors introduced during the coding phase, such as missing comments, wrong computations, incorrect data handling, or interface issues.

3. Explain techniques for finding defects.

 Static Techniques: Code reviews and inspections without executing the code.
 Dynamic Techniques: Running test cases to identify bugs.
 Operational Techniques: Defects found in production environments.

4. What do you mean by defect impact?


Defect impact measures how severely a defect affects the system. It is quantified as exposure E = P * I, where P is the probability of the defect becoming a problem and I is the impact if the problem occurs.
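
The formula above can be sketched as a small helper. The function name and the 0-1 probability scale are assumptions for illustration, not part of the source.

```python
# Hypothetical helper for the defect-impact (risk exposure) formula E = P * I.
# The 0.0-1.0 probability scale and the numeric impact rating are assumed.
def defect_exposure(probability: float, impact: float) -> float:
    """Return exposure E = P * I."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * impact

# Example: a defect with a 40% chance of becoming a problem and impact rating 5.
print(defect_exposure(0.4, 5))  # 2.0
```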

4 MARKS QUESTIONS:

1. Explain defect classification.


Defects can be classified into the following categories:

 Severity-wise: This classification is based on the severity of the defect's impact on the
system:
o Major: A defect causing a significant departure from requirements or failure
in functionality.
o Minor: A defect that doesn't significantly impact the product's execution.
o Fatal: A defect that causes the system to crash or other applications to
malfunction.
 Work product-wise: Defects arising from specific project deliverables:
o System Study Document (SSD): Defects found in the initial system analysis.
o Functional Specification Document (FSD): Defects in the functional design.
o Source Code: Errors in the actual code implementation.
 Type of Errors-wise: Defects based on error type:
o Comments Error: Inadequate or misleading comments in the code.
o Computation Error: Incorrect calculation or logic errors.
o Data Error: Incorrect data population or retrieval in the database.
o Interface Error: Issues with internal or external interfaces between modules.

2. What is defect management? Give defect classification in detail.

 Defect Management: The systematic process of identifying, logging, tracking, and resolving defects throughout the software development lifecycle. The goal is to find defects early and minimize the cost of fixing them.
 Defect Classification in Detail:
o Severity-wise: Major (affects functionality), Minor (non-critical issues), and
Fatal (crashes).
o Work Product-wise: Includes defects in documents such as system study,
functional specifications, architectural designs, and source code.
o Type of Error-wise: Includes various categories like comments errors (poor
documentation), computational errors (incorrect formulas), data errors (wrong
data manipulation), and more.
3. Illustrate the defect prevention process and the defect fixing process with a diagram.
The defect prevention process includes:

 Software Requirements Analysis: Ensures that defects during the requirements phase are minimized through proper analysis and validation.
 Reviews: Both self-reviews and peer reviews help catch defects early in the process
before formal testing.
 Defect Logging and Documentation: Defects are systematically logged and tracked.
 Root Cause Analysis: Identifying the root cause of defects to prevent them from
recurring.
 Embedding Procedures into the Development Process: Procedures are built into
the workflow to help avoid defects.

A diagram illustrating these steps would show a loop of prevention activities, including
planning, analyzing, reviewing, and improving processes.

4. Explain the defect life cycle, used to identify the status of a defect, with a properly labeled diagram.
The defect life cycle outlines the stages a defect goes through from discovery to closure:

 New: A defect is logged for the first time.


 Open: The defect is acknowledged by the development team.
 Assigned: The defect is assigned to a developer to be fixed.
 Test/Retest: After fixing, the defect is sent to the testing team for verification.
 Deferred: If a defect is low-priority or can be fixed in future releases, it is deferred.
 Rejected: The defect is rejected if it's not valid.
 Verified: If the tester confirms the defect is fixed, it is marked as verified.
 Closed: Once verified, the defect is marked closed.
 Reopened: If the defect reappears after it was supposedly fixed, it is reopened.
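
The stages above can be modeled as a simple state machine. The transition table below is an illustrative sketch based on the stages listed; real defect trackers vary in which transitions they allow.

```python
# Illustrative defect life cycle as a state machine; the allowed
# transitions are assumptions based on the stages described above.
ALLOWED = {
    "New": {"Open", "Rejected", "Deferred"},
    "Open": {"Assigned"},
    "Assigned": {"Test/Retest"},
    "Test/Retest": {"Verified", "Reopened"},
    "Deferred": {"Open"},
    "Rejected": {"Closed"},
    "Verified": {"Closed"},
    "Closed": {"Reopened"},
    "Reopened": {"Assigned"},
}

def move(current: str, target: str) -> str:
    """Return target if the transition is valid, otherwise raise ValueError."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"invalid transition: {current} -> {target}")
    return target

# Walk one defect through a typical happy path.
status = "New"
for nxt in ("Open", "Assigned", "Test/Retest", "Verified", "Closed"):
    status = move(status, nxt)
print(status)  # Closed
```
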
5. Explain the defect template with its attributes.
The defect report template is a structured format to document defects. Its key attributes
include:

 ID: A unique identifier for the defect.


 Project/Product: The name of the project or product in which the defect is found.
 Release Version: The version of the product where the defect was detected.
 Summary: A brief description of the defect.
 Description: A detailed explanation of the defect.
 Steps to Replicate: A step-by-step guide to reproduce the defect.
 Actual vs Expected Results: The observed result vs what was expected.
 Attachments: Screenshots or logs to support the defect.
 Defect Severity: How severe the defect is.
 Status: The current state of the defect (e.g., Open, Closed).
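
The template attributes above map naturally onto a record type. The dataclass below is a sketch: the field names follow the listed attributes, and the sample values are invented for illustration.

```python
from dataclasses import dataclass, field

# Illustrative defect report record; field names follow the template
# attributes listed above, and the sample values are invented.
@dataclass
class DefectReport:
    defect_id: str
    project: str
    release_version: str
    summary: str
    description: str
    steps_to_replicate: list
    actual_result: str
    expected_result: str
    severity: str            # e.g. "Major", "Minor", "Fatal"
    status: str = "New"      # current state in the defect life cycle
    attachments: list = field(default_factory=list)

bug = DefectReport(
    defect_id="DEF-101",
    project="Billing",
    release_version="2.3.0",
    summary="Total is wrong when a discount is applied",
    description="Applying a 10% discount doubles the tax amount.",
    steps_to_replicate=["Add an item", "Apply discount code", "View total"],
    actual_result="Total = 121.00",
    expected_result="Total = 110.00",
    severity="Major",
)
print(bug.status)  # New
```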

6. What are the different points to be noted in reporting a defect?


When reporting a defect, the following points are crucial:

 Be specific: Ensure the defect description is clear and to the point.


 Be detailed: Provide all necessary details to reproduce the defect.
 Be objective: Avoid making assumptions; stick to facts.
 Steps to reproduce: Include exact steps to reproduce the defect.
 Review the report: Check for accuracy before submitting the defect report.

“Chapter 5”
2 MARKS QUESTIONS:
1. Describe any two limitations of testing:

 Time-Consuming: Manual testing requires significant time and resources, especially for large applications.
 Less Accuracy: Human error can occur, leading to inaccurate test results.

2. What is automated testing?


Automated testing uses tools and scripts to automatically execute tests and compare actual
outcomes with expected ones. It speeds up the process and reduces manual effort.
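
The idea can be shown with a minimal script: the code under test runs against a table of inputs, and each actual result is compared with the expected one with no manual step. `apply_discount` is a hypothetical function used only for illustration.

```python
# Minimal automated-test sketch: execute the code under test and compare
# actual vs expected outcomes automatically. `apply_discount` is hypothetical.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def run_tests() -> int:
    """Run all cases and return the number of failures."""
    cases = [  # (price, percent, expected)
        (100.0, 10, 90.0),
        (50.0, 0, 50.0),
        (80.0, 25, 60.0),
    ]
    failures = 0
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures += 1
            print(f"FAIL {price}/{percent}%: got {actual}, expected {expected}")
    return failures

print(run_tests())  # prints 0 (no failures)
```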

3. Explain the need for automation testing:

 Reduces time: Automated tests are faster and can run repeatedly.
 Improves accuracy: Eliminates human error and provides consistent results.
 Efficient testing: Allows for complex tests to be run more frequently.

4. Explain static and dynamic testing tools in detail:

 Static Testing Tools: Analyze code without executing it (e.g., flow analyzers,
coverage analyzers) to ensure correctness.
 Dynamic Testing Tools: Execute code and test its behavior with live data (e.g., test
drivers, emulators).
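
The distinction can be illustrated in a few lines using Python's standard `ast` module: the static step inspects source text without running it, while the dynamic step executes the code with live data. The `divide` sample is invented for the example.

```python
import ast

# Code under analysis, held as plain text.
SOURCE = """
def divide(a, b):
    return a / b
"""

# Static: parse and inspect the source without executing it.
tree = ast.parse(SOURCE)
functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
print(functions)  # ['divide']

# Dynamic: execute the code and exercise it with real input data.
namespace = {}
exec(compile(tree, "<sample>", "exec"), namespace)
print(namespace["divide"](10, 4))  # 2.5
```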

5. What do you mean by software metrics? List any three types of metrics:
Software metrics are measurements used to assess the quality and performance of software
processes and products.
Three types of metrics:

 Product Metrics: Measure the characteristics of the software product.


 Process Metrics: Evaluate the development process.
 Progress Metrics: Track the progress of different project activities.

6. Define metrics and measurement:

 Metrics: Quantitative measures of software attributes (e.g., performance, quality).


 Measurement: The process of collecting and analyzing metrics to monitor and
improve software processes.

4 MARKS QUESTIONS:

1. Describe any four limitations of testing:

 Time-Consuming: Manual testing requires significant time and effort, especially for
large and complex applications. Covering all possible test cases takes a long time,
making it inefficient for frequent releases.
 Less Accuracy: Since manual testing involves human intervention, there's a higher
risk of errors or overlooking bugs. This can lead to unreliable results, especially when
running repetitive tests.
 Impractical for Performance Testing: Manually simulating the behavior of
thousands of users or testing for performance under heavy loads is nearly impossible.
Automation tools can handle such complex scenarios better.
 Repetitive and Tedious: Running the same tests repeatedly, such as in regression
testing, can become monotonous for testers and may lead to mistakes due to fatigue or
inattention.

2. State various advantages and disadvantages of using manual testing tools:

 Advantages:
o Human Insight: Manual testing allows testers to find issues related to user
experience and interface design, which automated tools might miss. Human
judgment is critical in identifying aesthetic and usability issues.
o Flexibility: Manual testing offers greater flexibility and adaptability in
handling unplanned changes or conducting exploratory testing without
predefined test scripts.
o Simplicity: Manual testing doesn't require special technical knowledge or
tools for basic tests, making it suitable for smaller projects or early-stage
development.
 Disadvantages:
o Time-Intensive: Manual testing for large and complex systems takes
significantly more time than automated testing, especially for repetitive tasks
like regression testing.
o Prone to Errors: Human testers are more likely to make mistakes or miss
issues, especially when performing repetitive tasks.
o Limited Scalability: Manual testing becomes impractical when testing for
performance, load, or handling massive amounts of data. Automation is better
suited for such large-scale scenarios.
3. Differentiate between manual testing and automation testing:

Manual Testing | Automation Testing
Performed manually by human testers. | Uses automated tools and scripts to execute tests.
Slower, as each test is executed manually. | Much faster, as tests are automated and can run quickly.
Prone to human error. | More accurate due to automation and lack of manual intervention.
Tedious and time-consuming to repeat the same tests. | Ideal for repetitive tasks like regression testing.
Lower upfront cost but expensive over time due to manual effort. | Higher initial cost for tools, but cost-effective in the long run.
Limited by the time and effort of testers. | Allows for more comprehensive test coverage in less time.
Best for exploratory, usability, and ad-hoc testing. | Best for regression, performance, load, and data-driven testing.
Can quickly adapt to changes or unplanned testing scenarios. | Requires predefined scripts; less flexible for on-the-fly changes.
Better at catching user experience issues and visual bugs. | Lacks the ability to judge user experience or design quality.
No maintenance required, but prone to inconsistency. | Requires maintenance of test scripts after changes in the software.

4. Explain the need for automation testing:


Automation testing is essential because:

 Saves Time: Automation allows repetitive tests like regression tests to be executed
quickly, saving a significant amount of time compared to manual testing.
 Improves Accuracy: Automated tests eliminate the risk of human errors that can
occur during manual testing, ensuring that the same tests are performed consistently.
 Increases Test Coverage: Automation enables the execution of large and complex
test suites, allowing more extensive coverage of different scenarios within a shorter
period.
 Enhances Productivity: Automation frees up testers to focus on more complex or
exploratory testing tasks by handling repetitive and routine tests automatically. It
allows testing to be done even during off-hours, ensuring faster feedback loops.

5. Elaborate the advantages of using test automation tools:

 Increased Efficiency: Automated tools execute tests much faster than manual testing.
They can run a large number of tests quickly and repeatedly, which improves overall
testing efficiency.
 Improved Consistency: Automation ensures that tests are executed the same way
each time, reducing variability in test execution and ensuring consistent results.
 Long-term Cost Savings: While automation tools may have a higher upfront cost for
setup, they save money in the long run by reducing manual labor and allowing for
faster test execution.
 Handles Complex Scenarios: Automated tools can handle complex testing scenarios,
such as performance or load testing, which are difficult or impossible to achieve
manually. They can simulate thousands of virtual users and execute tests across
multiple environments simultaneously.

6. Define metrics and measurement. Explain the need for software measurement:

 Metrics: Quantitative measurements used to evaluate the performance, quality, or progress of a software project. They provide objective data to assess whether a process or product is performing as expected.
 Measurement: Measurement is the process of collecting and analyzing data to
monitor software quality and processes.

Need for Software Measurement:

 Better Understanding: Metrics provide insight into how well a project is progressing, helping stakeholders and team members better understand the development process and make informed decisions.
 Process Control: Software measurement allows for better control over the
development process by identifying potential issues early on and taking corrective
actions before they become major problems.
 Continuous Improvement: By collecting data on past performance, teams can
identify areas where they need to improve, helping to refine processes and enhance
the quality of future projects.

7. State the different metrics types with their classification:

 Product Metrics: These metrics measure the characteristics of the software product
itself, such as its size, complexity, functionality, or performance.
 Process Metrics: These metrics focus on the effectiveness and efficiency of the
software development process, such as the time taken to complete tasks or the number
of defects detected during each phase.
 Progress Metrics: These metrics track the progress of the project, such as the number
of tests completed, the number of bugs fixed, or the percentage of tasks completed.

8. Explain the following three types of metrics:


 Project Metrics: These metrics track how well the project is planned and executed.
Examples include schedule adherence, resources used, or project milestones
completed.
 Progress Metrics: These metrics track the advancement of different project activities,
such as the number of tasks completed, the number of defects resolved, or the
percentage of testing coverage achieved.
 Productivity Metrics: These metrics focus on measuring productivity, such as the
number of lines of code written, the number of features delivered, or the amount of
work done in a given timeframe.
